Dec 09 14:22:47 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 09 14:22:47 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 
14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 14:22:47 crc 
restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 
14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:47 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 14:22:48 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 09 14:22:48 crc kubenswrapper[4770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 14:22:48 crc kubenswrapper[4770]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 09 14:22:48 crc kubenswrapper[4770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 14:22:48 crc kubenswrapper[4770]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
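The long run of "not reset as customized by admin" messages above is expected restorecon behavior rather than an error: container_file_t is typically listed among the policy's customizable types, so restorecon preserves those labels instead of resetting them, and only the genuinely mislabeled entries (here /var/lib/kubelet/config.json and the kubenswrapper binary) were actually relabeled. A minimal shell sketch for verifying this on the node follows; the paths come from the log above, while the commands and the -F behavior are standard policycoreutils tooling and not part of the original capture:

    # List only the local, admin-added file-context customizations (-C)
    semanage fcontext -l -C

    # Show the context the loaded policy would assign to a path from the log
    matchpathcon /var/lib/kubelet/config.json

    # Dry-run a recursive relabel: -n = change nothing, -v = report differences;
    # adding -F would force-reset customizable types such as container_file_t
    restorecon -Rnv /var/lib/kubelet/plugins
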
Dec 09 14:22:48 crc kubenswrapper[4770]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 09 14:22:48 crc kubenswrapper[4770]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.296242 4770 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301524 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301554 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301562 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301572 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301579 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301588 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301598 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301606 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301614 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301621 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301628 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301642 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301648 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301654 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301661 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301667 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301674 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301683 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
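Each deprecated-flag warning above points at the same migration: the kubelet wants these values in the KubeletConfiguration file named by --config rather than on the command line (only --pod-infra-container-image has no config-file equivalent; per the log itself, the sandbox image is taken from the CRI runtime instead). A hedged sketch of the equivalent config fragment, assuming the upstream kubelet.config.k8s.io/v1beta1 schema and a config path of /etc/kubernetes/kubelet.conf; the field names are real API fields, but every value below is illustrative and not read from this node:

    # Hypothetical fragment of the file passed via --config
    cat <<'EOF' > /etc/kubernetes/kubelet.conf
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # replaces --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
    registerWithTaints:                                           # replaces --register-with-taints
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    systemReserved:                                               # replaces --system-reserved
      cpu: 500m
      memory: 1Gi
    evictionHard:                                                 # per the warning, use eviction
      memory.available: 100Mi                                     # thresholds instead of
    EOF                                                           # --minimum-container-ttl-duration
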
Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301692 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301699 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301706 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301713 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301720 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301752 4770 feature_gate.go:330] unrecognized feature gate: Example Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301760 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301767 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301773 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301780 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301787 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301796 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301804 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301811 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301818 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301833 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301839 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301848 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301855 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301861 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301867 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301874 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301882 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301889 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301895 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301902 4770 feature_gate.go:330] unrecognized feature 
gate: MetricsCollectionProfiles Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301908 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301915 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301922 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301929 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301936 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301942 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301949 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301956 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301962 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301970 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301978 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301984 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301991 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.301997 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302003 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302010 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302016 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302023 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302031 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302038 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302044 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302050 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302057 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302063 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302072 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 
14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302078 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.302085 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302437 4770 flags.go:64] FLAG: --address="0.0.0.0" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302461 4770 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302477 4770 flags.go:64] FLAG: --anonymous-auth="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302487 4770 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302497 4770 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302505 4770 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302515 4770 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302525 4770 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302532 4770 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302540 4770 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302549 4770 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302557 4770 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302565 4770 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302573 4770 flags.go:64] FLAG: --cgroup-root="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302580 4770 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302587 4770 flags.go:64] FLAG: --client-ca-file="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302595 4770 flags.go:64] FLAG: --cloud-config="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302602 4770 flags.go:64] FLAG: --cloud-provider="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302609 4770 flags.go:64] FLAG: --cluster-dns="[]" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302618 4770 flags.go:64] FLAG: --cluster-domain="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302625 4770 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302633 4770 flags.go:64] FLAG: --config-dir="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302640 4770 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302648 4770 flags.go:64] FLAG: --container-log-max-files="5" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302658 4770 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302665 4770 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302673 4770 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 09 14:22:48 
crc kubenswrapper[4770]: I1209 14:22:48.302683 4770 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302691 4770 flags.go:64] FLAG: --contention-profiling="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302698 4770 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302706 4770 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302716 4770 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302747 4770 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302758 4770 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302766 4770 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302773 4770 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302780 4770 flags.go:64] FLAG: --enable-load-reader="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302788 4770 flags.go:64] FLAG: --enable-server="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302796 4770 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302809 4770 flags.go:64] FLAG: --event-burst="100" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302817 4770 flags.go:64] FLAG: --event-qps="50" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302824 4770 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302833 4770 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302840 4770 flags.go:64] FLAG: --eviction-hard="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302850 4770 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302857 4770 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302864 4770 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302872 4770 flags.go:64] FLAG: --eviction-soft="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302880 4770 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302888 4770 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302895 4770 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302902 4770 flags.go:64] FLAG: --experimental-mounter-path="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302910 4770 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302917 4770 flags.go:64] FLAG: --fail-swap-on="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302925 4770 flags.go:64] FLAG: --feature-gates="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302935 4770 flags.go:64] FLAG: --file-check-frequency="20s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302943 4770 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 09 14:22:48 crc 
kubenswrapper[4770]: I1209 14:22:48.302950 4770 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302958 4770 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302966 4770 flags.go:64] FLAG: --healthz-port="10248" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302973 4770 flags.go:64] FLAG: --help="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302981 4770 flags.go:64] FLAG: --hostname-override="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302988 4770 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.302997 4770 flags.go:64] FLAG: --http-check-frequency="20s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303005 4770 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303013 4770 flags.go:64] FLAG: --image-credential-provider-config="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303021 4770 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303029 4770 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303036 4770 flags.go:64] FLAG: --image-service-endpoint="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303044 4770 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303051 4770 flags.go:64] FLAG: --kube-api-burst="100" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303059 4770 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303067 4770 flags.go:64] FLAG: --kube-api-qps="50" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303074 4770 flags.go:64] FLAG: --kube-reserved="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303082 4770 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303090 4770 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303098 4770 flags.go:64] FLAG: --kubelet-cgroups="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303105 4770 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303113 4770 flags.go:64] FLAG: --lock-file="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303121 4770 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303128 4770 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303136 4770 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303147 4770 flags.go:64] FLAG: --log-json-split-stream="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303155 4770 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303162 4770 flags.go:64] FLAG: --log-text-split-stream="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303170 4770 flags.go:64] FLAG: --logging-format="text" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303178 4770 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 09 14:22:48 crc 
kubenswrapper[4770]: I1209 14:22:48.303186 4770 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303193 4770 flags.go:64] FLAG: --manifest-url="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303201 4770 flags.go:64] FLAG: --manifest-url-header="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303211 4770 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303219 4770 flags.go:64] FLAG: --max-open-files="1000000" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303234 4770 flags.go:64] FLAG: --max-pods="110" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303242 4770 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303250 4770 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303257 4770 flags.go:64] FLAG: --memory-manager-policy="None" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303265 4770 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303274 4770 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303281 4770 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303290 4770 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303308 4770 flags.go:64] FLAG: --node-status-max-images="50" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303316 4770 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303323 4770 flags.go:64] FLAG: --oom-score-adj="-999" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303331 4770 flags.go:64] FLAG: --pod-cidr="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303338 4770 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303351 4770 flags.go:64] FLAG: --pod-manifest-path="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303359 4770 flags.go:64] FLAG: --pod-max-pids="-1" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303367 4770 flags.go:64] FLAG: --pods-per-core="0" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303374 4770 flags.go:64] FLAG: --port="10250" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303382 4770 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303390 4770 flags.go:64] FLAG: --provider-id="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303398 4770 flags.go:64] FLAG: --qos-reserved="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303406 4770 flags.go:64] FLAG: --read-only-port="10255" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303413 4770 flags.go:64] FLAG: --register-node="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303421 4770 flags.go:64] FLAG: --register-schedulable="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303429 4770 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 09 14:22:48 crc 
kubenswrapper[4770]: I1209 14:22:48.303442 4770 flags.go:64] FLAG: --registry-burst="10" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303451 4770 flags.go:64] FLAG: --registry-qps="5" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303460 4770 flags.go:64] FLAG: --reserved-cpus="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303467 4770 flags.go:64] FLAG: --reserved-memory="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303477 4770 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303485 4770 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303493 4770 flags.go:64] FLAG: --rotate-certificates="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303500 4770 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303508 4770 flags.go:64] FLAG: --runonce="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303516 4770 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303524 4770 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303532 4770 flags.go:64] FLAG: --seccomp-default="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303539 4770 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303547 4770 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303560 4770 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303568 4770 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303576 4770 flags.go:64] FLAG: --storage-driver-password="root" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303583 4770 flags.go:64] FLAG: --storage-driver-secure="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303590 4770 flags.go:64] FLAG: --storage-driver-table="stats" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303599 4770 flags.go:64] FLAG: --storage-driver-user="root" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303607 4770 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303615 4770 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303623 4770 flags.go:64] FLAG: --system-cgroups="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303630 4770 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303643 4770 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303651 4770 flags.go:64] FLAG: --tls-cert-file="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303659 4770 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303668 4770 flags.go:64] FLAG: --tls-min-version="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303675 4770 flags.go:64] FLAG: --tls-private-key-file="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303682 4770 flags.go:64] FLAG: --topology-manager-policy="none" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 
14:22:48.303690 4770 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303697 4770 flags.go:64] FLAG: --topology-manager-scope="container" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303706 4770 flags.go:64] FLAG: --v="2" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303716 4770 flags.go:64] FLAG: --version="false" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303748 4770 flags.go:64] FLAG: --vmodule="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303758 4770 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.303766 4770 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.305865 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306213 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306237 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306250 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306262 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306273 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306285 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306294 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306305 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306315 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306325 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306335 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306344 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306355 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306364 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306375 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306385 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306395 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306406 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306415 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 14:22:48 crc 
kubenswrapper[4770]: W1209 14:22:48.306425 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306434 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306443 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306453 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306462 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306472 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306485 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306501 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306511 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306522 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306533 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306543 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306555 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306584 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306596 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306608 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306621 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306632 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306642 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306651 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306659 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306668 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306677 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306684 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306692 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 
14:22:48.306701 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306708 4770 feature_gate.go:330] unrecognized feature gate: Example Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306717 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306764 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306773 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306781 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306790 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306801 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306811 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306825 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306838 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306848 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306857 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306867 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306875 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306884 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306893 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306901 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306912 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
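Between the warning blocks, the flags.go:64 lines above dump every registered flag as FLAG: --name="value", whether each value was set explicitly or left at its default. A dump like that is essentially one VisitAll pass over the flag set; a minimal sketch with two sample flags:

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("kubelet-demo", pflag.ContinueOnError)
	fs.String("node-ip", "192.168.126.11", "node IP")
	fs.Int32("max-pods", 110, "maximum pods")
	_ = fs.Parse(nil)

	// Mirrors the FLAG: --name="value" dump above: iterate every registered
	// flag and print its effective value. With pflag's default SortFlags,
	// VisitAll walks flags in lexical order, matching the sorted dump.
	fs.VisitAll(func(f *pflag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}
```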
Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306922 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306934 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306943 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306951 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306959 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306967 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.306975 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.306991 4770 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.315521 4770 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.315561 4770 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315645 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315653 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315658 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315664 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315669 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315676 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315681 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315686 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315693 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
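Once overrides are applied, the effective set is logged as feature gates: {map[...]} (feature_gate.go:386). The alphabetical key order suggests the map is printed with fmt, which sorts map keys (Go 1.12 and later), so the line is deterministic across restarts. A small example of that formatting (gate values abbreviated):

```go
package main

import "fmt"

func main() {
	gates := map[string]bool{
		"ValidatingAdmissionPolicy": true,
		"KMSv1":                     true,
		"CloudDualStackNodeIPs":     true,
		"NodeSwap":                  false,
	}
	// fmt prints map keys in sorted order, which is why the
	// "feature gates: {map[...]}" entries above always list gates
	// alphabetically regardless of registration order.
	fmt.Printf("feature gates: {%v}\n", gates)
}
```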
Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315700 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315705 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315710 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315715 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315721 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315744 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315749 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315754 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315758 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315763 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315767 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315772 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315776 4770 feature_gate.go:330] unrecognized feature gate: Example Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315781 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315785 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315790 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315796 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315801 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315805 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315811 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315815 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315820 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315826 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315830 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315835 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315840 4770 feature_gate.go:330] unrecognized feature gate: 
NutanixMultiSubnets Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315845 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315850 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315854 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315859 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315864 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315869 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315874 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315878 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315883 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315887 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315892 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315896 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315901 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315906 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315910 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315915 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315919 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315923 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315928 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315932 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315939 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315945 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315951 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315957 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315963 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315968 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315974 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315978 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315984 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315988 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315993 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.315997 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316002 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316007 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316012 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316017 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.316025 4770 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316172 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316184 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316190 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316196 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316202 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316209 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316214 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316219 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316224 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316229 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316233 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316238 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316243 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316248 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316252 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316257 4770 feature_gate.go:330] unrecognized feature gate: Example Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316262 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316266 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316270 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316276 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316280 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316286 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316290 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316295 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316301 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316305 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316310 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316315 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316319 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316324 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316329 4770 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316333 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316337 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316342 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316346 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316351 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316355 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316361 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316366 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316370 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316376 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316381 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316386 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316391 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316396 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316401 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316406 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316411 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316416 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316421 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316431 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316436 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316441 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316446 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316450 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316455 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316460 4770 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316465 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316469 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316474 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316478 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316482 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316487 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316492 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316496 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316501 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316505 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316510 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316516 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316520 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.316525 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.316532 4770 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.316745 4770 server.go:940] "Client rotation is on, will bootstrap in background" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.319763 4770 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.319863 4770 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
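The certificate rotation lines that follow choose a deadline well short of expiry: against an expiration of 2026-02-24 05:52:08, the manager picks 2025-12-18 18:36:44 and then sleeps roughly 220 hours. client-go jitters the rotation deadline inside the certificate's validity window so that a fleet of kubelets does not rotate in lockstep; a sketch, assuming a one-year certificate and a 70-90% jitter band (both are assumptions for illustration, not values read from the log):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline sketches the jitter client-go's certificate manager
// applies: the deadline lands at a random point in roughly the 70-90% band
// of the certificate's validity window. The band bounds here are an
// assumption for illustration; consult client-go for the exact policy.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Hypothetical one-year certificate; the log only shows the expiration.
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	deadline := nextRotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	fmt.Println("waiting:", time.Until(deadline).Truncate(time.Second))
}
```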
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.320512 4770 server.go:997] "Starting client certificate rotation"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.320531 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.320761 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-18 18:36:44.918306133 +0000 UTC
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.320898 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 220h13m56.597414262s for next certificate rotation
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.332719 4770 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.336711 4770 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.361595 4770 log.go:25] "Validated CRI v1 runtime API"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.388810 4770 log.go:25] "Validated CRI v1 image API"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.391172 4770 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.395938 4770 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-09-14-18-40-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.396010 4770 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.416760 4770 manager.go:217] Machine: {Timestamp:2025-12-09 14:22:48.415506038 +0000 UTC m=+0.311708194 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:139bdcda-380d-487b-979e-5b5b7a0626d7 BootID:a2f2a625-8e35-4260-bf8e-bdf5a38e558e Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:83:6b:8b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:83:6b:8b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:aa:0a:26 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6b:44:db Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b2:40:7a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:eb:36:1f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:7c:0e:a3:6a:eb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0e:f2:76:23:32:c8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.416985 4770 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.417139 4770 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.417440 4770 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.417600 4770 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.417638 4770 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.417846 4770 topology_manager.go:138] "Creating topology manager with none policy"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.417856 4770 container_manager_linux.go:303] "Creating device plugin manager"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.418022 4770 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.418057 4770 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.418361 4770 state_mem.go:36] "Initialized new in-memory state store"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.418458 4770 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.457570 4770 kubelet.go:418] "Attempting to sync node with API server"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.457621 4770 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.457656 4770 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.457670 4770 kubelet.go:324] "Adding apiserver pod source"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.457694 4770 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.459460 4770 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.473229 4770 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.486943 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.487096 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.493246 4770 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.493350 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.493447 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494207 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494268 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494285 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494298 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494320 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494335 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494349 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494370 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494388 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494402 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494430 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494448 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.494790 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.495609 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.495648 4770 server.go:1280] "Started kubelet"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.495949 4770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.496267 4770 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.496697 4770 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 09 14:22:48 crc systemd[1]: Started Kubernetes Kubelet.
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.497629 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.497662 4770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.497756 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:54:29.897318029 +0000 UTC
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.497800 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 266h31m41.399521931s for next certificate rotation
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.497864 4770 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.497893 4770 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.498121 4770 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.499250 4770 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.499309 4770 server.go:460] "Adding debug handlers to kubelet server"
Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.498355 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f920e2aab155d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:22:48.495576413 +0000 UTC m=+0.391778589,LastTimestamp:2025-12-09 14:22:48.495576413 +0000 UTC m=+0.391778589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.505805 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="200ms"
Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.507110 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.507399 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.507913 4770 factory.go:153] Registering CRI-O factory
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.507969 4770 factory.go:221] Registration of the crio container factory successfully
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.508062 4770 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.529906 4770 factory.go:55] Registering systemd factory
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.529933 4770 factory.go:221] Registration of the systemd container factory successfully
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.529981 4770 factory.go:103] Registering Raw factory
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.530022 4770 manager.go:1196] Started watching for new ooms in manager
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531085 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531134 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531149 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531163 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531175 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531188 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531199 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531211 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531226 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531231 4770 manager.go:319] Starting recovery of all containers
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531239 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531252 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531264 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531276 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531319 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531332 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531345 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531357 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531370 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531386 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531397 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531411 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531423 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531440 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531458 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531473 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531486 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531506 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531524 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531540 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531555 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531570 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531584 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531598 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531617 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531633 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531650 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531667 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531684 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531700 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531711 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531750 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531771 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531836 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531854 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531870 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531888 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531905 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531922 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531938 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531956 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531971 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.531996 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532016 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532028 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532044 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532058 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532075 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532091 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532143 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532165 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532181 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532200 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532217 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532234 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532253 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532270 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532286 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532301 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532316 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532330 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532342 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532355 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532367 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532381 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532394 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532408 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532453 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532465 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532478 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532490 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532501 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532552 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532589 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532603 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532619 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532635 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532657 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532671 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532696 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532714 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532754 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532769 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532782 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532795 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532809 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532822 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532838 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532852 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532875 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532889 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532904 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532919 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532935 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.532950 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533013 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533034 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533050 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533066 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533081 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533098 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533115 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533130 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533145 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533160 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533175 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533189 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533203 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533218 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533238 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533255 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533269 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533283 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533298 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533312 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533326 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533341 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533356 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533369 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533386 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533401 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533416 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533431 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533447 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533462 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533476 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533512 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533527 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533542 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533558 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533571 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533585 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753"
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533599 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533614 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533630 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533645 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533660 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533673 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533687 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533701 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533715 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533770 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533784 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533798 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533812 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533826 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533839 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533854 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533870 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533884 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533898 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533913 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533928 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533945 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533958 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533972 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.533984 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534000 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534016 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534031 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534045 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534061 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534075 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534090 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534104 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534118 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534133 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534148 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534161 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534176 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534190 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534203 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534216 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534234 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534248 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534261 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534274 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534286 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534298 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534312 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534325 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534342 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534356 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534369 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534382 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534399 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534411 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534424 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.534437 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535222 4770 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535249 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535266 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535280 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535294 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535307 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535321 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535336 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535349 4770 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535363 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535376 4770 reconstruct.go:97] "Volume reconstruction finished" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.535385 4770 reconciler.go:26] "Reconciler: start to sync state" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.544987 4770 manager.go:324] Recovery completed Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.555902 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.558282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.558615 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.559208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.560422 4770 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.560442 4770 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.560466 4770 state_mem.go:36] "Initialized new in-memory state store" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.584841 4770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.586933 4770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
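
The reconstruct.go:130 flood above is the restarted kubelet rebuilding its actual state of the world: it rescans the volume directories still present under /var/lib/kubelet/pods and re-adds each volume it finds as "uncertain" until the desired state confirms or cleans it up; reconstruct.go:97 marks the end of the scan, and only then does the reconciler start (reconciler.go:26). The sketch below walks the same layout and prints log-style names. It assumes the standard on-disk structure /var/lib/kubelet/pods/<podUID>/volumes/<escaped-plugin>/<volume> and is an illustration of the idea, not the kubelet's actual reconstruction code.

// reconstruct_sketch.go: rediscover volumes from the kubelet's on-disk layout.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	root := "/var/lib/kubelet/pods"
	pods, err := os.ReadDir(root)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, pod := range pods {
		plugins, err := os.ReadDir(filepath.Join(root, pod.Name(), "volumes"))
		if err != nil {
			continue // pod dir without a volumes/ subdir
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(root, pod.Name(), "volumes", plugin.Name()))
			if err != nil {
				continue
			}
			for _, v := range vols {
				// On disk the plugin name is escaped ("kubernetes.io~configmap");
				// the log prints it unescaped ("kubernetes.io/configmap").
				name := strings.ReplaceAll(plugin.Name(), "~", "/")
				fmt.Printf("uncertain volume: podName=%q volumeName=%q\n",
					pod.Name(), name+"/"+pod.Name()+"-"+v.Name())
			}
		}
	}
}

Run on the node itself (the directory is root-only), the output should roughly line up with the reconstruct.go:130 entries above.
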
protocol="IPv6" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.586986 4770 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.587016 4770 kubelet.go:2335] "Starting kubelet main sync loop" Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.587089 4770 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 09 14:22:48 crc kubenswrapper[4770]: W1209 14:22:48.588090 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.588163 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.599324 4770 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.663857 4770 policy_none.go:49] "None policy: Start" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.664865 4770 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.664904 4770 state_mem.go:35] "Initializing new in-memory state store" Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.687474 4770 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.699444 4770 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.706642 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="400ms" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.720944 4770 manager.go:334] "Starting Device Plugin manager" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.721011 4770 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.721027 4770 server.go:79] "Starting device plugin registration server" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.721485 4770 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.721509 4770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.721744 4770 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.721831 4770 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 
Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.721843 4770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.728345 4770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.822688 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.824415 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.824477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.824490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.824517 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 14:22:48 crc kubenswrapper[4770]: E1209 14:22:48.825377 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.888682 4770 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.888876 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.891091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.891176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.891195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.891533 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.891673 4770 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.891802 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.893397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.893437 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.893484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.893702 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.894656 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.894779 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.895624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.895659 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.895702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.897156 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.897222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.897237 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.897650 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.898133 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.898195 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.898918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.898956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.898972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.899495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.899573 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.899590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.900293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.900324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.900332 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.900496 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.900677 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.900791 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.901613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.901637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.901646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.901833 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.901867 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.902181 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.902232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.902253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.902668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.902689 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.902699 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945420 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945510 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945533 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945557 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945578 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:48 crc 
kubenswrapper[4770]: I1209 14:22:48.945599 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945622 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945647 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945671 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945694 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945762 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:48 crc kubenswrapper[4770]: I1209 14:22:48.945784 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.026109 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.027625 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.027690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.027710 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.027802 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 14:22:49 crc 
kubenswrapper[4770]: E1209 14:22:49.028497 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.046766 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.046826 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.046863 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.046903 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.046942 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.046975 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047006 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047025 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047062 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047100 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047039 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047144 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047199 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047148 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047263 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047226 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047328 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047335 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047369 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047383 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047433 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047468 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047475 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047548 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047590 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047641 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047800 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.047876 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: E1209 14:22:49.107972 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.148419 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.148519 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.148557 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.148811 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.228940 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.244356 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: W1209 14:22:49.267178 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3dea55e2906b30341609e03bb74d79a48df3be8b94eb5d60cc023f2ab4efb20c WatchSource:0}: Error finding container 3dea55e2906b30341609e03bb74d79a48df3be8b94eb5d60cc023f2ab4efb20c: Status 404 returned error can't find the container with id 3dea55e2906b30341609e03bb74d79a48df3be8b94eb5d60cc023f2ab4efb20c Dec 09 14:22:49 crc kubenswrapper[4770]: W1209 14:22:49.269572 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2f4b964014d0428ebfd39eb93d0bdf9539cf1e44a14cb4b5b32440b2781730e5 WatchSource:0}: Error finding container 2f4b964014d0428ebfd39eb93d0bdf9539cf1e44a14cb4b5b32440b2781730e5: Status 404 returned error can't find the container with id 2f4b964014d0428ebfd39eb93d0bdf9539cf1e44a14cb4b5b32440b2781730e5 Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.275387 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.286658 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: W1209 14:22:49.305860 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-dec0adce4d52c61c9e597721490918f58b90926a334b32a47c4d81c2769ecd32 WatchSource:0}: Error finding container dec0adce4d52c61c9e597721490918f58b90926a334b32a47c4d81c2769ecd32: Status 404 returned error can't find the container with id dec0adce4d52c61c9e597721490918f58b90926a334b32a47c4d81c2769ecd32 Dec 09 14:22:49 crc kubenswrapper[4770]: W1209 14:22:49.307031 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-aab30e830221437d9cc228b2698d5ee15e0e690bc069fc169a58e5abd5ed2ad6 WatchSource:0}: Error finding container aab30e830221437d9cc228b2698d5ee15e0e690bc069fc169a58e5abd5ed2ad6: Status 404 returned error can't find the container with id aab30e830221437d9cc228b2698d5ee15e0e690bc069fc169a58e5abd5ed2ad6 Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.310479 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:22:49 crc kubenswrapper[4770]: W1209 14:22:49.327635 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3c3f7dcf4a1ab51c04815f6b7c541283c00f79865be69b095fb115da27fb0d1b WatchSource:0}: Error finding container 3c3f7dcf4a1ab51c04815f6b7c541283c00f79865be69b095fb115da27fb0d1b: Status 404 returned error can't find the container with id 3c3f7dcf4a1ab51c04815f6b7c541283c00f79865be69b095fb115da27fb0d1b Dec 09 14:22:49 crc kubenswrapper[4770]: W1209 14:22:49.406861 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Dec 09 14:22:49 crc kubenswrapper[4770]: E1209 14:22:49.407000 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.429251 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.430254 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.430312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.430326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.430348 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 14:22:49 crc kubenswrapper[4770]: E1209 14:22:49.430881 4770 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.496483 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.591893 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3c3f7dcf4a1ab51c04815f6b7c541283c00f79865be69b095fb115da27fb0d1b"} Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.593865 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aab30e830221437d9cc228b2698d5ee15e0e690bc069fc169a58e5abd5ed2ad6"} Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.596097 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dec0adce4d52c61c9e597721490918f58b90926a334b32a47c4d81c2769ecd32"} Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.596777 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2f4b964014d0428ebfd39eb93d0bdf9539cf1e44a14cb4b5b32440b2781730e5"} Dec 09 14:22:49 crc kubenswrapper[4770]: I1209 14:22:49.597289 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3dea55e2906b30341609e03bb74d79a48df3be8b94eb5d60cc023f2ab4efb20c"} Dec 09 14:22:49 crc kubenswrapper[4770]: W1209 14:22:49.653637 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Dec 09 14:22:49 crc kubenswrapper[4770]: E1209 14:22:49.653704 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Dec 09 14:22:49 crc kubenswrapper[4770]: W1209 14:22:49.792228 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Dec 09 14:22:49 crc kubenswrapper[4770]: E1209 14:22:49.792633 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Dec 
09 14:22:49 crc kubenswrapper[4770]: W1209 14:22:49.846028 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Dec 09 14:22:49 crc kubenswrapper[4770]: E1209 14:22:49.846120 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Dec 09 14:22:49 crc kubenswrapper[4770]: E1209 14:22:49.909199 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="1.6s" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.231784 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.233360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.233397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.233422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.233450 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 14:22:50 crc kubenswrapper[4770]: E1209 14:22:50.234088 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Dec 09 14:22:50 crc kubenswrapper[4770]: E1209 14:22:50.310818 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f920e2aab155d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:22:48.495576413 +0000 UTC m=+0.391778589,LastTimestamp:2025-12-09 14:22:48.495576413 +0000 UTC m=+0.391778589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.497221 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.600642 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a" 
exitCode=0 Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.600704 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a"} Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.600815 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.601903 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.601946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.601962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.602523 4770 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99" exitCode=0 Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.602592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99"} Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.602712 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.603441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.603482 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.603498 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.604678 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.604909 4770 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b480ca7c89cca43c9d624155c406331bbb6d9ba368151f43b6f1936070cbed63" exitCode=0 Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.604960 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b480ca7c89cca43c9d624155c406331bbb6d9ba368151f43b6f1936070cbed63"} Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.605043 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.611478 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.611517 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.611535 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.611682 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.611739 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.611753 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.613304 4770 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee" exitCode=0 Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.613485 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.614207 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee"} Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.614635 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.614676 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.614693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.617123 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa"} Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.617152 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26"} Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.617162 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961"} Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.617172 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e"} Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.617224 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.618682 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 
09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.618764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:50 crc kubenswrapper[4770]: I1209 14:22:50.618783 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:51 crc kubenswrapper[4770]: W1209 14:22:51.127787 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Dec 09 14:22:51 crc kubenswrapper[4770]: E1209 14:22:51.127892 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.497177 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Dec 09 14:22:51 crc kubenswrapper[4770]: E1209 14:22:51.509999 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="3.2s" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.622804 4770 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4" exitCode=0 Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.622860 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.623017 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.624384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.624440 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.624462 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.626953 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"80d0ab7d6e32320846464de8190db434e79cb847bd564202d3ed3b8f84eb1853"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.627072 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.628060 4770 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.628088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.628098 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.635819 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3cdb3d7b981af632cfe211bf77cec39078183e1bf00340512775e3187ba7a260"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.635870 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7cb50c0d2ebf1a5fb054060797889366776d97a07ede4ab2c1502e8769d6d8b6"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.635883 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bd8d48bca138e010498dc1d434a1efa683e67f2176336efa0ce20ca32f4e27c2"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.635920 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.636807 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.636837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.636847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.639418 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.639409 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.639453 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.639454 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.639559 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.639585 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.639602 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0"} Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.640297 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.640373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.640401 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.640325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.640674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.640786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.835092 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.836339 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.836379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.836389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:51 crc kubenswrapper[4770]: I1209 14:22:51.836418 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.256625 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.455335 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.643985 4770 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf" exitCode=0 Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.644094 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.644105 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.644126 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf"} Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.644208 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.644241 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.644285 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.645110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.645145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.645164 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.645408 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.645441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.645456 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.645971 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.646031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.646044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.646092 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.646115 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:52 crc kubenswrapper[4770]: I1209 14:22:52.646056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.652249 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9"} Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.652322 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.652326 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d"} Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.652468 4770 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.652480 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242"} Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.653494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.653540 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.653559 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.653929 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.653974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:53 crc kubenswrapper[4770]: I1209 14:22:53.653992 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.117178 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.117363 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.118764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.118826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.118843 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.258932 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.660016 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.660521 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.660801 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325"} Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.660848 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682"} Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.661093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.661121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.661130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.661286 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.661319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:54 crc kubenswrapper[4770]: I1209 14:22:54.661333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.401492 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.662075 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.662989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.663021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.663039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.696475 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.696662 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.697899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.697944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:55 crc kubenswrapper[4770]: I1209 14:22:55.697957 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:56 crc kubenswrapper[4770]: I1209 14:22:56.250872 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:56 crc kubenswrapper[4770]: I1209 14:22:56.664690 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:56 crc kubenswrapper[4770]: I1209 14:22:56.664751 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:56 crc kubenswrapper[4770]: I1209 14:22:56.665823 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:56 crc kubenswrapper[4770]: I1209 14:22:56.665855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:56 crc kubenswrapper[4770]: I1209 
14:22:56.665865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:56 crc kubenswrapper[4770]: I1209 14:22:56.666088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:56 crc kubenswrapper[4770]: I1209 14:22:56.666141 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:56 crc kubenswrapper[4770]: I1209 14:22:56.666150 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:57 crc kubenswrapper[4770]: I1209 14:22:57.979021 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:57 crc kubenswrapper[4770]: I1209 14:22:57.979257 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:57 crc kubenswrapper[4770]: I1209 14:22:57.981148 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:57 crc kubenswrapper[4770]: I1209 14:22:57.981236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:57 crc kubenswrapper[4770]: I1209 14:22:57.981263 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:57 crc kubenswrapper[4770]: I1209 14:22:57.986533 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.383927 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.384127 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.385289 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.385322 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.385334 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.670888 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.672062 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.672102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.672120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.697438 4770 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:22:58 crc kubenswrapper[4770]: I1209 14:22:58.697528 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:22:58 crc kubenswrapper[4770]: E1209 14:22:58.728819 4770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:23:01 crc kubenswrapper[4770]: W1209 14:23:01.813864 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 09 14:23:01 crc kubenswrapper[4770]: I1209 14:23:01.815063 4770 trace.go:236] Trace[1992605622]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:22:51.812) (total time: 10001ms): Dec 09 14:23:01 crc kubenswrapper[4770]: Trace[1992605622]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:23:01.813) Dec 09 14:23:01 crc kubenswrapper[4770]: Trace[1992605622]: [10.001774525s] [10.001774525s] END Dec 09 14:23:01 crc kubenswrapper[4770]: E1209 14:23:01.815220 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 09 14:23:01 crc kubenswrapper[4770]: E1209 14:23:01.838086 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 09 14:23:01 crc kubenswrapper[4770]: W1209 14:23:01.923957 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 09 14:23:01 crc kubenswrapper[4770]: I1209 14:23:01.924033 4770 trace.go:236] Trace[794917750]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:22:51.922) (total time: 10001ms): Dec 09 14:23:01 crc kubenswrapper[4770]: Trace[794917750]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:23:01.923) Dec 09 14:23:01 crc kubenswrapper[4770]: Trace[794917750]: [10.001448397s] [10.001448397s] END Dec 09 14:23:01 crc kubenswrapper[4770]: E1209 14:23:01.924054 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 09 14:23:02 crc 
kubenswrapper[4770]: I1209 14:23:02.426600 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:23:02 crc kubenswrapper[4770]: I1209 14:23:02.426744 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 14:23:02 crc kubenswrapper[4770]: I1209 14:23:02.431042 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:23:02 crc kubenswrapper[4770]: I1209 14:23:02.431111 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 14:23:02 crc kubenswrapper[4770]: I1209 14:23:02.464441 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:23:02 crc kubenswrapper[4770]: I1209 14:23:02.464529 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 14:23:04 crc kubenswrapper[4770]: I1209 14:23:04.127059 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:23:04 crc kubenswrapper[4770]: I1209 14:23:04.127787 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:23:04 crc kubenswrapper[4770]: I1209 14:23:04.129146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:04 crc kubenswrapper[4770]: I1209 14:23:04.129192 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:04 crc kubenswrapper[4770]: I1209 14:23:04.129201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:05 crc kubenswrapper[4770]: I1209 14:23:05.038682 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:23:05 crc kubenswrapper[4770]: I1209 14:23:05.040560 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:05 crc 
kubenswrapper[4770]: I1209 14:23:05.040637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:05 crc kubenswrapper[4770]: I1209 14:23:05.040652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:05 crc kubenswrapper[4770]: I1209 14:23:05.040699 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 09 14:23:05 crc kubenswrapper[4770]: E1209 14:23:05.047581 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Dec 09 14:23:05 crc kubenswrapper[4770]: I1209 14:23:05.480006 4770 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.346581 4770 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.470530 4770 apiserver.go:52] "Watching apiserver"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.483080 4770 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.483382 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.483951 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.484027 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.484035 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.484044 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.484162 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:06 crc kubenswrapper[4770]: E1209 14:23:06.484249 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.484275 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:06 crc kubenswrapper[4770]: E1209 14:23:06.484539 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:06 crc kubenswrapper[4770]: E1209 14:23:06.484753 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.487249 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.487649 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.487652 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.487760 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.487802 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.488023 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.488305 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.489065 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.490124 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.500138 4770 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.516701 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.533443 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.548039 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.562515 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.578120 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.588083 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:06 crc kubenswrapper[4770]: I1209 14:23:06.600343 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.415460 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.417782 4770 trace.go:236] Trace[894627803]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:22:52.614) (total time: 14803ms):
Dec 09 14:23:07 crc kubenswrapper[4770]: Trace[894627803]: ---"Objects listed" error: 14803ms (14:23:07.417)
Dec 09 14:23:07 crc kubenswrapper[4770]: Trace[894627803]: [14.803320236s] [14.803320236s] END
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.417825 4770 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.419165 4770 trace.go:236] Trace[781361329]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:22:55.995) (total time: 11423ms):
Dec 09 14:23:07 crc kubenswrapper[4770]: Trace[781361329]: ---"Objects listed" error: 11423ms (14:23:07.418)
Dec 09 14:23:07 crc kubenswrapper[4770]: Trace[781361329]: [11.423471744s] [11.423471744s] END
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.419367 4770 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.423496 4770 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.460326 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.466585 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.480290 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.483992 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.509539 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.515965 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.519751 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.523856 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.523897 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.523919 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.523941 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.523960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.523980 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524029 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524057 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524078 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524095 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524110 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524128 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524143 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524159 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524173 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524188 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524206 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524226 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524241 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524259 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524294 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524311 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524345 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524364 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524381 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524396 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524416 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524432 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524450 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524468 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524572 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524593 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524595 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524611 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524628 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524645 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524663 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524679 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524694 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524710 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524749 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524786 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524811 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524836 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524874 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524901 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524924 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.524988 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525012 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525036 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525061 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525084 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525105 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525128 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525152 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525175 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525197 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525217 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525236 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525252 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525274 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525298 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525323 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525317 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525348 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525343 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525373 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525397 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525394 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525419 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525504 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525542 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525571 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525600 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525625 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525649 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525506 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525571 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525580 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527004 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525641 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.525670 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:23:08.025646782 +0000 UTC m=+19.921849028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527059 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527079 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527098 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527119 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527124 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527139 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527158 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527176 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527193 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527211 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527228 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527244 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527261 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527278 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527295 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527316 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527338 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527356 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527374 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527392 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527409 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527427 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527445 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527465 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527485 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527503 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527523 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527544 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527565 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527585 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527602 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527621 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527640 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527658 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527673 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527690 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527707 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527742 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527760 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527776 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527794 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527815 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527830 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527847 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527864 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527883 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527902 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527922 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527941 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527989 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528005 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528023 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528038 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528055 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528071 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528086 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528102 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528121 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528137 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528155 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528171 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528187 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528207 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528230 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528254 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528288 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528313 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528337 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528361 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528383 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528406 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528425 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528447 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528466 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528483 4770 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528503 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528519 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528536 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528554 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528570 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528587 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528603 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528620 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528635 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " 
Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528657 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528675 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528693 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528709 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528756 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528776 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528793 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528808 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528827 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528843 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528861 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528876 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528893 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528909 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528927 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528944 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528959 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528975 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528992 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.529008 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.529026 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.532391 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.532590 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.532691 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.532802 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.532883 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.532962 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533038 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533109 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533208 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533287 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533369 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533446 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533527 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533605 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533689 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533794 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533872 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533953 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 14:23:07 crc kubenswrapper[4770]: 
I1209 14:23:07.534082 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534198 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534289 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534384 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534551 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534651 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534754 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534834 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534919 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535003 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535086 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535168 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535245 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535388 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535534 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535603 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527129 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: 
"v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525777 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525919 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525971 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526183 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526247 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526277 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526358 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526339 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526390 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526470 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526513 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526573 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535875 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535894 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526650 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526691 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526708 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526798 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.536009 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526927 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526933 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526955 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526980 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527220 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527240 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527341 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527433 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527450 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527546 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.527898 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528213 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528277 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525259 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.528357 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.529236 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.525688 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.532901 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533095 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533108 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533277 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533406 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533483 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533528 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533570 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533627 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533795 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.533791 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534162 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534068 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534287 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534526 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534605 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534693 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.534914 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535228 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535280 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.536217 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.536430 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.536460 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535379 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535452 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.536500 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535558 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.526601 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535874 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.536026 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.536681 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.536826 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.537136 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.537539 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.537571 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.538306 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.538421 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.538543 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.538966 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539052 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.535664 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539197 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539213 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539226 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539237 4770 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539250 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539263 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539261 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539274 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539409 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539435 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539644 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539656 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539751 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.539927 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.540002 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.540080 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.540156 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.540253 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.540265 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.540271 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.540396 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.540535 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.541004 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.541223 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.541461 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.541483 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.541638 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.541765 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.541824 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.542100 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.542109 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.542162 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.542185 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.542390 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.542544 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.542635 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.542705 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.543366 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.543355 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.543871 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.543964 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:08.043941879 +0000 UTC m=+19.940144205 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.544237 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.544448 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.545092 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.545296 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:08.045269052 +0000 UTC m=+19.941471378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.545648 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.548287 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.548455 4770 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.548498 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.548580 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.548852 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.548896 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.549123 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.549184 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.549224 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.550601 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.551866 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.551897 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.552423 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.561448 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.562040 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.562053 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.562322 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.562360 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.562379 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.562621 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:08.062454491 +0000 UTC m=+19.958656627 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.563041 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.563077 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.563098 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.563238 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:08.063159428 +0000 UTC m=+19.959361564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.566331 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.566444 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.566640 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.566897 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.566624 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.570038 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.572781 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.573189 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.573504 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.573601 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.573689 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.574090 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.574756 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.574804 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.574927 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.575127 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.577145 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.577177 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.577257 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.578793 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.578896 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.579064 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.579658 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.581868 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.586168 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.586740 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.587112 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.587535 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.587562 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.587877 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.587901 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.588142 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.588458 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.589397 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.589624 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.589787 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.590839 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.591032 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.591056 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opens
hift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.591102 4770 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.591113 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.591460 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.591792 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.592339 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.592438 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.592697 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.592477 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.593009 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.593585 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.593672 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.593917 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.594507 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.594573 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.594876 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.595526 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.595910 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.597038 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.601631 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.601639 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.602143 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.602598 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.602900 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.603117 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.603887 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.603914 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.604015 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.604375 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.614426 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.618017 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.629533 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.631097 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.638533 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642227 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642283 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642359 4770 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642377 4770 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642391 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642400 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642410 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642422 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642432 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642428 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642442 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642509 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642521 4770 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642532 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642543 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642552 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642562 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642571 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642581 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642589 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642598 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642607 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642617 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642628 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642640 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642650 4770 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642660 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642670 4770 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642681 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642694 4770 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642720 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642744 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642753 4770 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642762 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642772 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642781 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642801 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642811 4770 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642819 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642838 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642847 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642856 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642865 4770 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642874 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642883 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642892 4770 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642902 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642911 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642928 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642938 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 
14:23:07.642947 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642956 4770 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642965 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642973 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642982 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642991 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643000 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643009 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643018 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643027 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643037 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643046 4770 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643055 4770 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643064 4770 
reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643072 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643081 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643089 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643098 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643106 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643120 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643128 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643147 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643157 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643167 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643175 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643184 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643192 4770 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643205 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643218 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643228 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643237 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643246 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643257 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643268 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643278 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643288 4770 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643296 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643304 4770 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643313 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643323 
4770 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643333 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643341 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643351 4770 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643359 4770 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643368 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643377 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643386 4770 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643395 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643404 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643412 4770 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643421 4770 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643431 4770 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643441 4770 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643449 4770 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643459 4770 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643467 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643475 4770 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643483 4770 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643491 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643500 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643515 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643523 4770 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643531 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643540 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643549 4770 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643558 4770 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643566 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643575 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643583 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643591 4770 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643600 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643608 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643617 4770 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643625 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643633 4770 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643641 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643649 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643671 4770 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643679 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643687 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643695 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643703 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643711 4770 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643719 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643780 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643789 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643796 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643804 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643813 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643821 4770 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643829 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643838 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643847 4770 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643855 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643864 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643873 4770 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643882 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643892 4770 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643912 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643920 4770 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643929 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643936 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643944 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643953 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643962 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") 
on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643970 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643985 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.643993 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644001 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644009 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644018 4770 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644027 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644035 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644043 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644051 4770 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644059 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644068 4770 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644078 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644085 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644094 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644102 4770 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644110 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644119 4770 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644128 4770 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644141 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644150 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644160 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644169 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644177 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644186 4770 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644195 4770 reconciler_common.go:293] 
"Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644204 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644212 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644221 4770 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644229 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644238 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.644245 4770 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.642441 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.646628 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.657983 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.669139 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: E1209 14:23:07.707003 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.712169 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.723748 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 14:23:07 crc kubenswrapper[4770]: W1209 14:23:07.724527 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-aba2c1692304037a059fb3befb3e2e27052d083e4ad32a0c74e6831c05975b99 WatchSource:0}: Error finding container aba2c1692304037a059fb3befb3e2e27052d083e4ad32a0c74e6831c05975b99: Status 404 returned error can't find the container with id aba2c1692304037a059fb3befb3e2e27052d083e4ad32a0c74e6831c05975b99 Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.728960 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 14:23:07 crc kubenswrapper[4770]: W1209 14:23:07.736176 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-fc433b7de1a23b5020306ed92022aebb37562a279bb36ea1e38317e221651307 WatchSource:0}: Error finding container fc433b7de1a23b5020306ed92022aebb37562a279bb36ea1e38317e221651307: Status 404 returned error can't find the container with id fc433b7de1a23b5020306ed92022aebb37562a279bb36ea1e38317e221651307 Dec 09 14:23:07 crc kubenswrapper[4770]: W1209 14:23:07.741832 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-157b36c1374e15ea923db57d757f554897f16bf11dd3cf84dc23ef0d66bfdd4d WatchSource:0}: Error finding container 157b36c1374e15ea923db57d757f554897f16bf11dd3cf84dc23ef0d66bfdd4d: Status 404 returned error can't find the container with id 157b36c1374e15ea923db57d757f554897f16bf11dd3cf84dc23ef0d66bfdd4d Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.838973 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.844164 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.855091 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.856069 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.866323 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.879057 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.896559 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.910288 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.924476 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.935882 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.948354 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.958303 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.972761 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:07 crc kubenswrapper[4770]: I1209 14:23:07.986467 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.001261 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.012023 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.029455 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.042284 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.047475 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.047547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.047571 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.047683 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.047685 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:23:09.04765248 +0000 UTC m=+20.943854616 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.047754 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-09 14:23:09.047718102 +0000 UTC m=+20.943920238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.047769 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.047808 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:09.047800614 +0000 UTC m=+20.944002750 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.148676 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.148794 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.148962 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.148998 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.149012 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.149025 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.149033 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.149044 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.149110 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:09.149088485 +0000 UTC m=+21.045290831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.149138 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:09.149127576 +0000 UTC m=+21.045329952 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.414747 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.431679 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.433922 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.435091 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.446842 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.464510 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.481554 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.499539 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.512686 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.532818 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.547637 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.565323 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.581638 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.588641 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.588664 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.588718 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.588859 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.589022 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:08 crc kubenswrapper[4770]: E1209 14:23:08.589116 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.593102 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.594272 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.596351 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.597344 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.597787 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.599693 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.600758 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.601912 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.603752 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.605010 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.606841 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.607856 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.609945 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.610989 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.612053 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.613742 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.614462 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.615717 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.616343 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.617127 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.618390 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.619015 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.620497 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.621126 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.622069 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.623021 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.623694 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.624294 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.624420 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.624965 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.625677 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.627999 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.628499 4770 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.628626 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.630834 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.631438 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" 
path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.631985 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.633835 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.634933 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.635538 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.636784 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.638664 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.638974 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.639556 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.641424 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.643270 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.644189 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.645543 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.646559 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.647154 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.648449 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.650103 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.651029 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.652240 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.652983 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.653438 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.653992 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.655388 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.669566 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.681042 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.693689 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.699369 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2"} Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.699406 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100"} Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.699417 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fc433b7de1a23b5020306ed92022aebb37562a279bb36ea1e38317e221651307"} Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.700483 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"aba2c1692304037a059fb3befb3e2e27052d083e4ad32a0c74e6831c05975b99"} Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.701751 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc"} Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.702703 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"157b36c1374e15ea923db57d757f554897f16bf11dd3cf84dc23ef0d66bfdd4d"} Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.714711 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.728995 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.745466 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.760603 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.776673 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.797979 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13f
e8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.815418 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.832460 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.862160 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.880688 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.896672 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.915394 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.935082 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:08 crc kubenswrapper[4770]: I1209 14:23:08.958014 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.011128 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.058166 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.058230 
4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.058252 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.058362 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.058390 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.058398 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:23:11.058366606 +0000 UTC m=+22.954568752 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.058508 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:11.058490059 +0000 UTC m=+22.954692195 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.058530 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:11.05852234 +0000 UTC m=+22.954724596 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.074632 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.110388 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.138342 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.158743 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.158814 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.158890 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.158920 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.158931 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.158941 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.158960 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.158972 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:09 crc kubenswrapper[4770]: 
E1209 14:23:09.158987 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:11.158971959 +0000 UTC m=+23.055174095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:09 crc kubenswrapper[4770]: E1209 14:23:09.159023 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:11.1590049 +0000 UTC m=+23.055207126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.463783 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2n6xq"] Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.464150 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-2n6xq" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.466843 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.468461 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.474482 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.487508 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0
8287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.512671 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.528194 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.540521 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.558847 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.562347 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3cfd8e64-8449-4d75-98b0-b98f94026bb7-hosts-file\") pod \"node-resolver-2n6xq\" (UID: \"3cfd8e64-8449-4d75-98b0-b98f94026bb7\") " pod="openshift-dns/node-resolver-2n6xq" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.562519 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kknzh\" (UniqueName: \"kubernetes.io/projected/3cfd8e64-8449-4d75-98b0-b98f94026bb7-kube-api-access-kknzh\") pod \"node-resolver-2n6xq\" (UID: \"3cfd8e64-8449-4d75-98b0-b98f94026bb7\") " pod="openshift-dns/node-resolver-2n6xq" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.586367 4770 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad
795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.607132 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.646911 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.663836 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3cfd8e64-8449-4d75-98b0-b98f94026bb7-hosts-file\") pod \"node-resolver-2n6xq\" (UID: \"3cfd8e64-8449-4d75-98b0-b98f94026bb7\") " pod="openshift-dns/node-resolver-2n6xq" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.663889 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kknzh\" (UniqueName: \"kubernetes.io/projected/3cfd8e64-8449-4d75-98b0-b98f94026bb7-kube-api-access-kknzh\") pod \"node-resolver-2n6xq\" (UID: \"3cfd8e64-8449-4d75-98b0-b98f94026bb7\") " pod="openshift-dns/node-resolver-2n6xq" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.664007 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3cfd8e64-8449-4d75-98b0-b98f94026bb7-hosts-file\") pod \"node-resolver-2n6xq\" (UID: \"3cfd8e64-8449-4d75-98b0-b98f94026bb7\") " pod="openshift-dns/node-resolver-2n6xq" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.679119 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.694676 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kknzh\" (UniqueName: \"kubernetes.io/projected/3cfd8e64-8449-4d75-98b0-b98f94026bb7-kube-api-access-kknzh\") pod \"node-resolver-2n6xq\" (UID: \"3cfd8e64-8449-4d75-98b0-b98f94026bb7\") " pod="openshift-dns/node-resolver-2n6xq" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.716196 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.782093 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2n6xq" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.877098 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-h5dw2"] Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.877528 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.878580 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fbhnj"] Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.878805 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.879019 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2kpd7"] Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.879470 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.879773 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.881148 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.881326 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.881527 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.883704 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.883949 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.884112 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.884165 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.884223 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.884110 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.885061 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.885148 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.890680 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k4btz"] Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.891626 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.900281 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.900361 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.901646 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.901862 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.902065 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.903654 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.909656 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.917758 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.940072 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.966716 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-var-lib-cni-multus\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.966903 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " 
pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.966920 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-run-netns\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.966936 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-ovn-kubernetes\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.966950 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-env-overrides\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.966963 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-cnibin\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.966976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-hostroot\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.966902 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.966993 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-node-log\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967147 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-cnibin\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967164 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-systemd-units\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967177 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a1d646c4-f044-4dcd-91d5-44034f746659-cni-binary-copy\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967193 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-netns\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967206 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-log-socket\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967227 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-var-lib-cni-bin\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967266 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-daemon-config\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967281 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjt6c\" (UniqueName: \"kubernetes.io/projected/c38553c5-6cc9-435b-8c52-3262b861d1cf-kube-api-access-xjt6c\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967296 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-etc-kubernetes\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967349 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-var-lib-kubelet\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967370 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1d646c4-f044-4dcd-91d5-44034f746659-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967385 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4kv5\" (UniqueName: \"kubernetes.io/projected/51498c5e-9a5a-426a-aac1-0da87076675a-kube-api-access-d4kv5\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967462 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-kubelet\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967495 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-script-lib\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967525 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-system-cni-dir\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967548 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-cni-dir\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967567 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-run-k8s-cni-cncf-io\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967587 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/51498c5e-9a5a-426a-aac1-0da87076675a-rootfs\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967607 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lms92\" (UniqueName: \"kubernetes.io/projected/a1d646c4-f044-4dcd-91d5-44034f746659-kube-api-access-lms92\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967630 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-netd\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967716 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-conf-dir\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967781 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-run-multus-certs\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967804 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-systemd\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967821 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967838 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-socket-dir-parent\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967854 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51498c5e-9a5a-426a-aac1-0da87076675a-proxy-tls\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967871 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-slash\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967885 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-etc-openvswitch\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967899 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-config\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967912 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-os-release\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " 
pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967926 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-var-lib-openvswitch\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967940 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-ovn\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.967997 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51498c5e-9a5a-426a-aac1-0da87076675a-mcd-auth-proxy-config\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.968039 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-openvswitch\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.968073 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-bin\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.968087 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpm77\" (UniqueName: \"kubernetes.io/projected/39aa66d3-1416-4178-a4bc-34179463fd45-kube-api-access-hpm77\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.968105 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-system-cni-dir\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.968121 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-os-release\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.968135 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/c38553c5-6cc9-435b-8c52-3262b861d1cf-cni-binary-copy\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.968149 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39aa66d3-1416-4178-a4bc-34179463fd45-ovn-node-metrics-cert\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.982886 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:09 crc kubenswrapper[4770]: I1209 14:23:09.995641 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:09Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.006158 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.019389 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.031922 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.042068 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.054278 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.064504 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.068879 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-run-netns\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.068917 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-ovn-kubernetes\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.068938 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-cnibin\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.068957 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-hostroot\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.068973 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-node-log\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.068989 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-env-overrides\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069007 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-cnibin\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069024 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-systemd-units\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069036 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-ovn-kubernetes\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069063 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-var-lib-cni-bin\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069064 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-hostroot\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069082 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-daemon-config\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069025 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-run-netns\") pod \"multus-h5dw2\" (UID: 
\"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069115 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-node-log\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069119 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-cnibin\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069150 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-var-lib-cni-bin\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069157 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjt6c\" (UniqueName: \"kubernetes.io/projected/c38553c5-6cc9-435b-8c52-3262b861d1cf-kube-api-access-xjt6c\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069173 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-cnibin\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069180 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a1d646c4-f044-4dcd-91d5-44034f746659-cni-binary-copy\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069216 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-systemd-units\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069224 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-netns\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-log-socket\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069317 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-etc-kubernetes\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069348 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-log-socket\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069369 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-netns\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069397 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-etc-kubernetes\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069430 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-var-lib-kubelet\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069453 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1d646c4-f044-4dcd-91d5-44034f746659-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069505 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-system-cni-dir\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069529 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-cni-dir\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069551 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-run-k8s-cni-cncf-io\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/51498c5e-9a5a-426a-aac1-0da87076675a-rootfs\") pod 
\"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069654 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4kv5\" (UniqueName: \"kubernetes.io/projected/51498c5e-9a5a-426a-aac1-0da87076675a-kube-api-access-d4kv5\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-kubelet\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069704 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-script-lib\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069757 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-netd\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069786 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lms92\" (UniqueName: \"kubernetes.io/projected/a1d646c4-f044-4dcd-91d5-44034f746659-kube-api-access-lms92\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069837 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a1d646c4-f044-4dcd-91d5-44034f746659-cni-binary-copy\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069889 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-conf-dir\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069913 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-run-multus-certs\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069938 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069963 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-systemd\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069982 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-daemon-config\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.069998 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51498c5e-9a5a-426a-aac1-0da87076675a-proxy-tls\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070020 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-slash\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070021 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-env-overrides\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070076 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-conf-dir\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070096 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-var-lib-kubelet\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070112 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-system-cni-dir\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070230 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-cni-dir\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " 
pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070246 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/51498c5e-9a5a-426a-aac1-0da87076675a-rootfs\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070254 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-kubelet\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070274 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-etc-openvswitch\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070280 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-run-multus-certs\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070291 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-config\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070300 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-slash\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070306 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-socket-dir-parent\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070335 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-multus-socket-dir-parent\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070336 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-var-lib-openvswitch\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070528 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-ovn\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070572 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-os-release\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070598 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-run-k8s-cni-cncf-io\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070620 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51498c5e-9a5a-426a-aac1-0da87076675a-mcd-auth-proxy-config\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070633 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-var-lib-openvswitch\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070648 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-openvswitch\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070659 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070684 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-systemd\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070705 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-netd\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070709 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-etc-openvswitch\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070767 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-script-lib\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070833 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-system-cni-dir\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070850 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-bin\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070877 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-openvswitch\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070875 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-ovn\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070948 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-os-release\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070950 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-bin\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070966 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpm77\" (UniqueName: \"kubernetes.io/projected/39aa66d3-1416-4178-a4bc-34179463fd45-kube-api-access-hpm77\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.070977 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-system-cni-dir\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071011 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-os-release\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071038 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c38553c5-6cc9-435b-8c52-3262b861d1cf-cni-binary-copy\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071061 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-os-release\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071080 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39aa66d3-1416-4178-a4bc-34179463fd45-ovn-node-metrics-cert\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071124 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-var-lib-cni-multus\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071146 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071213 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c38553c5-6cc9-435b-8c52-3262b861d1cf-host-var-lib-cni-multus\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071476 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1d646c4-f044-4dcd-91d5-44034f746659-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071641 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-config\") pod 
\"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071650 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51498c5e-9a5a-426a-aac1-0da87076675a-mcd-auth-proxy-config\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.071643 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c38553c5-6cc9-435b-8c52-3262b861d1cf-cni-binary-copy\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.075321 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51498c5e-9a5a-426a-aac1-0da87076675a-proxy-tls\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.075379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39aa66d3-1416-4178-a4bc-34179463fd45-ovn-node-metrics-cert\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.087784 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4kv5\" (UniqueName: \"kubernetes.io/projected/51498c5e-9a5a-426a-aac1-0da87076675a-kube-api-access-d4kv5\") pod \"machine-config-daemon-fbhnj\" (UID: \"51498c5e-9a5a-426a-aac1-0da87076675a\") " pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.088854 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.089352 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjt6c\" (UniqueName: \"kubernetes.io/projected/c38553c5-6cc9-435b-8c52-3262b861d1cf-kube-api-access-xjt6c\") pod \"multus-h5dw2\" (UID: \"c38553c5-6cc9-435b-8c52-3262b861d1cf\") " pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.090142 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lms92\" (UniqueName: \"kubernetes.io/projected/a1d646c4-f044-4dcd-91d5-44034f746659-kube-api-access-lms92\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.091867 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpm77\" (UniqueName: \"kubernetes.io/projected/39aa66d3-1416-4178-a4bc-34179463fd45-kube-api-access-hpm77\") pod \"ovnkube-node-k4btz\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.101347 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.111963 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.121830 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.135205 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.155210 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.169607 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.180682 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.192266 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.193372 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-h5dw2" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.201674 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.207122 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.218092 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.223545 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.234012 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-d
ir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.247389 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.262685 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.445811 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1d646c4-f044-4dcd-91d5-44034f746659-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2kpd7\" (UID: \"a1d646c4-f044-4dcd-91d5-44034f746659\") " pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: W1209 14:23:10.447448 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc38553c5_6cc9_435b_8c52_3262b861d1cf.slice/crio-5bcf2e9a23c49bc4811eeca1559078c49f02a0f206de8fe1c24ac19a78508180 WatchSource:0}: Error finding container 5bcf2e9a23c49bc4811eeca1559078c49f02a0f206de8fe1c24ac19a78508180: Status 404 returned error can't find the container with id 5bcf2e9a23c49bc4811eeca1559078c49f02a0f206de8fe1c24ac19a78508180 Dec 09 14:23:10 crc kubenswrapper[4770]: W1209 14:23:10.460828 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51498c5e_9a5a_426a_aac1_0da87076675a.slice/crio-3559068e98b85b693702ba561144eea6fc396f83cd0db86e1b6e47d2697dda8f WatchSource:0}: Error finding container 3559068e98b85b693702ba561144eea6fc396f83cd0db86e1b6e47d2697dda8f: Status 404 returned error can't find the container with id 
3559068e98b85b693702ba561144eea6fc396f83cd0db86e1b6e47d2697dda8f Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.515290 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" Dec 09 14:23:10 crc kubenswrapper[4770]: W1209 14:23:10.527816 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39aa66d3_1416_4178_a4bc_34179463fd45.slice/crio-f3697b6905a040b8726353af6682f59f50aa7faa9f4fc517d9227165fdd0cb19 WatchSource:0}: Error finding container f3697b6905a040b8726353af6682f59f50aa7faa9f4fc517d9227165fdd0cb19: Status 404 returned error can't find the container with id f3697b6905a040b8726353af6682f59f50aa7faa9f4fc517d9227165fdd0cb19 Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.590388 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:10 crc kubenswrapper[4770]: E1209 14:23:10.590582 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.590613 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:10 crc kubenswrapper[4770]: E1209 14:23:10.590834 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.591013 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:10 crc kubenswrapper[4770]: E1209 14:23:10.591126 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.708088 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" event={"ID":"a1d646c4-f044-4dcd-91d5-44034f746659","Type":"ContainerStarted","Data":"5c24713ae94ff6bc27d4af0240449473ed99861778de2e647d61e8645a33f97f"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.710469 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.710511 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"3559068e98b85b693702ba561144eea6fc396f83cd0db86e1b6e47d2697dda8f"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.712812 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h5dw2" event={"ID":"c38553c5-6cc9-435b-8c52-3262b861d1cf","Type":"ContainerStarted","Data":"08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.712854 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h5dw2" event={"ID":"c38553c5-6cc9-435b-8c52-3262b861d1cf","Type":"ContainerStarted","Data":"5bcf2e9a23c49bc4811eeca1559078c49f02a0f206de8fe1c24ac19a78508180"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.714766 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2n6xq" event={"ID":"3cfd8e64-8449-4d75-98b0-b98f94026bb7","Type":"ContainerStarted","Data":"ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.714803 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2n6xq" event={"ID":"3cfd8e64-8449-4d75-98b0-b98f94026bb7","Type":"ContainerStarted","Data":"4a4025ecfb268402699dfb9f6ff355bf5b51493781fd00b324b5d17f7808d6e6"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.717790 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3" exitCode=0 Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.717834 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.717853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"f3697b6905a040b8726353af6682f59f50aa7faa9f4fc517d9227165fdd0cb19"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.720494 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45"} Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.727070 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.736866 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.751208 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.765135 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.779289 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.792973 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.806044 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.823220 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.879206 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.902092 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.925794 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.939221 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.950819 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.968465 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.981159 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:10 crc kubenswrapper[4770]: I1209 14:23:10.996881 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:10Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.009690 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.029392 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.042274 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.057810 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.076261 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.080691 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.080794 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.080827 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:23:15.080801633 +0000 UTC m=+26.977003799 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.080865 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.080888 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.080906 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:15.080894635 +0000 UTC m=+26.977096771 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.081071 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.081166 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:15.081144601 +0000 UTC m=+26.977346787 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.087899 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.099177 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.126026 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.169603 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.181984 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.182027 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.182138 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.182143 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.182184 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.182196 4770 projected.go:194] Error preparing 
data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.182249 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:15.182233136 +0000 UTC m=+27.078435272 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.182153 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.182276 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.182314 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:15.182300288 +0000 UTC m=+27.078502424 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.207893 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.248089 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.285846 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.448531 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.450770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.450812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.450822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.450923 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.457761 4770 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.458005 4770 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.458935 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.458979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.458990 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.459007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.459018 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.486029 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.489950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.489995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.490006 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.490021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.490033 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.500988 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.504292 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.504318 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.504325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.504450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.504462 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.517859 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.522162 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.522201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.522210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.522222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.522232 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.538849 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.542572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.542598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.542607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.542619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.542628 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.554248 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: E1209 14:23:11.554411 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.556069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.556117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.556134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.556152 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.556168 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.658109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.658267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.658335 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.658395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.658451 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.729136 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.729177 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.729187 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.729195 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.729206 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.729216 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.730829 4770 generic.go:334] "Generic (PLEG): container finished" podID="a1d646c4-f044-4dcd-91d5-44034f746659" containerID="120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9" exitCode=0 Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.730890 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" event={"ID":"a1d646c4-f044-4dcd-91d5-44034f746659","Type":"ContainerDied","Data":"120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.734863 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.745566 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.760375 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.767642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.767686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.767695 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.767714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.767738 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.774659 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.785480 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.809263 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.823204 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.837185 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc 
kubenswrapper[4770]: I1209 14:23:11.870077 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.875269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.875349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.875366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.875392 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.875411 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
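
The NodeNotReady condition repeated throughout this window is independent of the webhook problem: the kubelet reports NetworkReady=false because its container runtime has not yet found a CNI configuration, and the ovnkube-node pod that would provide one is still in PodInitializing, as the patch above shows. The runtime's readiness test amounts to finding a loadable network config in the directory named in the message; a presence-only sketch of that check follows (the real implementation in CRI-O/ocicni also parses and validates the file, which is omitted here):

    # Sketch: presence-only version of the "is there a CNI config yet?"
    # test behind the NetworkReady=false message. The directory is the one
    # named in the log; the extension list mirrors what libcni accepts,
    # which is an assumption about the runtime's exact behavior.
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"

    def network_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
        try:
            return any(name.endswith((".conf", ".conflist", ".json"))
                       for name in os.listdir(conf_dir))
        except FileNotFoundError:
            return False

    if not network_ready():
        print("NetworkReady=false: no CNI configuration file in", CNI_CONF_DIR)

Once the ovnkube-controller container starts and writes its config into that directory, the condition should clear on the kubelet's next sync.
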
Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.883241 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.895953 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.909676 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.924003 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.936620 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.951658 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.970678 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.977860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.977903 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.977913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.977928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.977939 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:11Z","lastTransitionTime":"2025-12-09T14:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:11 crc kubenswrapper[4770]: I1209 14:23:11.984486 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.013569 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z 
is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.046273 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.079873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.079918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.079934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.079952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.079963 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:12Z","lastTransitionTime":"2025-12-09T14:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.084331 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.130212 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"
tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.131107 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9sc42"] Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.131476 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.157448 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.177008 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.182150 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.182180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.182190 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.182205 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.182216 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:12Z","lastTransitionTime":"2025-12-09T14:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.193091 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35f69e34-efd9-4905-9617-5e16997dbae3-host\") pod \"node-ca-9sc42\" (UID: \"35f69e34-efd9-4905-9617-5e16997dbae3\") " pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.193132 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnbvc\" (UniqueName: \"kubernetes.io/projected/35f69e34-efd9-4905-9617-5e16997dbae3-kube-api-access-jnbvc\") pod \"node-ca-9sc42\" (UID: \"35f69e34-efd9-4905-9617-5e16997dbae3\") " pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.193166 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35f69e34-efd9-4905-9617-5e16997dbae3-serviceca\") pod \"node-ca-9sc42\" (UID: \"35f69e34-efd9-4905-9617-5e16997dbae3\") " pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.196573 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.216878 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.269066 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syst
em-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.284761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.284797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.284807 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.284820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.284830 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:12Z","lastTransitionTime":"2025-12-09T14:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.287885 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.294242 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35f69e34-efd9-4905-9617-5e16997dbae3-host\") pod \"node-ca-9sc42\" (UID: \"35f69e34-efd9-4905-9617-5e16997dbae3\") " pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.294284 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnbvc\" (UniqueName: \"kubernetes.io/projected/35f69e34-efd9-4905-9617-5e16997dbae3-kube-api-access-jnbvc\") pod \"node-ca-9sc42\" (UID: \"35f69e34-efd9-4905-9617-5e16997dbae3\") " pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.294319 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35f69e34-efd9-4905-9617-5e16997dbae3-serviceca\") pod \"node-ca-9sc42\" (UID: \"35f69e34-efd9-4905-9617-5e16997dbae3\") " pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.294436 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35f69e34-efd9-4905-9617-5e16997dbae3-host\") pod \"node-ca-9sc42\" (UID: \"35f69e34-efd9-4905-9617-5e16997dbae3\") " pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.295197 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35f69e34-efd9-4905-9617-5e16997dbae3-serviceca\") pod \"node-ca-9sc42\" (UID: \"35f69e34-efd9-4905-9617-5e16997dbae3\") " pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.335013 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnbvc\" (UniqueName: \"kubernetes.io/projected/35f69e34-efd9-4905-9617-5e16997dbae3-kube-api-access-jnbvc\") pod \"node-ca-9sc42\" (UID: \"35f69e34-efd9-4905-9617-5e16997dbae3\") " pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.348796 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.386936 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.387711 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.387764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.387776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.387794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.387805 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:12Z","lastTransitionTime":"2025-12-09T14:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.426459 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.443690 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9sc42" Dec 09 14:23:12 crc kubenswrapper[4770]: W1209 14:23:12.454454 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35f69e34_efd9_4905_9617_5e16997dbae3.slice/crio-8f81092717e779da9d87d1e8041f55c9086dba43a8ae2f89ee9f1c168a434e1b WatchSource:0}: Error finding container 8f81092717e779da9d87d1e8041f55c9086dba43a8ae2f89ee9f1c168a434e1b: Status 404 returned error can't find the container with id 8f81092717e779da9d87d1e8041f55c9086dba43a8ae2f89ee9f1c168a434e1b Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.467794 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.490651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.491076 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.491087 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.491102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.491112 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:12Z","lastTransitionTime":"2025-12-09T14:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.506260 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.547107 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.587795 4770 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.587824 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:12 crc kubenswrapper[4770]: E1209 14:23:12.587914 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.587973 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:12 crc kubenswrapper[4770]: E1209 14:23:12.588103 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:12 crc kubenswrapper[4770]: E1209 14:23:12.588180 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.593450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.593475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.593483 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.593496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.593505 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:12Z","lastTransitionTime":"2025-12-09T14:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.597735 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.624410 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.670040 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z 
is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.696157 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.696199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.696210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.696224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.696234 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:12Z","lastTransitionTime":"2025-12-09T14:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.704214 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: 
I1209 14:23:12.738307 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9sc42" event={"ID":"35f69e34-efd9-4905-9617-5e16997dbae3","Type":"ContainerStarted","Data":"63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.738368 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9sc42" event={"ID":"35f69e34-efd9-4905-9617-5e16997dbae3","Type":"ContainerStarted","Data":"8f81092717e779da9d87d1e8041f55c9086dba43a8ae2f89ee9f1c168a434e1b"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.740149 4770 generic.go:334] "Generic (PLEG): container finished" podID="a1d646c4-f044-4dcd-91d5-44034f746659" containerID="a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631" exitCode=0 Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.740220 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" event={"ID":"a1d646c4-f044-4dcd-91d5-44034f746659","Type":"ContainerDied","Data":"a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.746617 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.789963 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.798085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.798133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.798144 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.798161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.798171 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:12Z","lastTransitionTime":"2025-12-09T14:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.828036 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.865747 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.900719 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.900805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.900817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.900832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.900841 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:12Z","lastTransitionTime":"2025-12-09T14:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.908209 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.947641 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:12 crc kubenswrapper[4770]: I1209 14:23:12.987166 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:12Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.002422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.002460 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.002471 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.002485 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.002496 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.026922 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.066953 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.104857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.104919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.104938 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.104962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.104979 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.106050 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.154816 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.194670 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.207011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.207059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.207076 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.207098 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.207115 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.228489 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.265359 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.303266 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.308815 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.308839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.308847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.308860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.308868 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.349355 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.392238 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z 
is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.411492 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.411533 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.411543 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.411557 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.411567 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.428405 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.469166 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.510596 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.514307 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.514334 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.514343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.514356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.514365 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.555441 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.584225 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.616556 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.616610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.616627 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.616652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.616669 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.629255 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.667114 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.710451 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.718321 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.718360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.718372 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.718391 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.718403 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.744953 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.746802 4770 generic.go:334] "Generic (PLEG): container finished" podID="a1d646c4-f044-4dcd-91d5-44034f746659" containerID="bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70" exitCode=0 Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.746848 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" event={"ID":"a1d646c4-f044-4dcd-91d5-44034f746659","Type":"ContainerDied","Data":"bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.785845 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.824262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.824312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.824324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.824341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.824352 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.825088 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.868372 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09
T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.913614 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z 
is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.926533 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.926578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.926587 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.926600 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.926609 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:13Z","lastTransitionTime":"2025-12-09T14:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.945850 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:13 crc kubenswrapper[4770]: I1209 14:23:13.987876 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.024882 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.028351 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.028387 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.028395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.028409 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.028419 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.066353 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.106395 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.130480 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.130528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.130539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.130551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.130560 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.148277 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.184863 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.226259 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.232742 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.232777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.232797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.232812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.232822 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.264058 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.310287 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.336044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.336105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.336121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.336145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.336162 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.345488 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.439453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.439518 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.439546 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.439571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.439586 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.542597 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.542642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.542655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.542670 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.542679 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.587454 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.587501 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:14 crc kubenswrapper[4770]: E1209 14:23:14.587595 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.587448 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:14 crc kubenswrapper[4770]: E1209 14:23:14.587706 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:14 crc kubenswrapper[4770]: E1209 14:23:14.587898 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.645493 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.645544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.645555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.645572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.645584 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.747810 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.747844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.747853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.747866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.747875 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.756216 4770 generic.go:334] "Generic (PLEG): container finished" podID="a1d646c4-f044-4dcd-91d5-44034f746659" containerID="888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c" exitCode=0 Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.756270 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" event={"ID":"a1d646c4-f044-4dcd-91d5-44034f746659","Type":"ContainerDied","Data":"888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c"} Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.786521 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698"} Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.795652 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.812537 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.824949 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.836593 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.848509 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.849996 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.850311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.850481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.850571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.850674 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.858594 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.881329 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.891998 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.902748 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.923446 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.938476 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.947955 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.953143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:14 
crc kubenswrapper[4770]: I1209 14:23:14.953184 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.953195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.953212 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.953224 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:14Z","lastTransitionTime":"2025-12-09T14:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.958684 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.975508 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:14 crc kubenswrapper[4770]: I1209 14:23:14.992411 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:14Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.055513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.055577 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.055597 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.055620 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.055637 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.152575 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.152702 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.152769 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.152946 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.153007 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:23.152993929 +0000 UTC m=+35.049196065 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.153045 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:23:23.153015659 +0000 UTC m=+35.049217795 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.153068 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.153189 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:23.153160293 +0000 UTC m=+35.049362459 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.158029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.158057 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.158067 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.158081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.158089 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.253305 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.253409 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.253578 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.253601 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.253620 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.253632 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.253680 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.253696 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:23.253676104 +0000 UTC m=+35.149878280 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.253700 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:15 crc kubenswrapper[4770]: E1209 14:23:15.253817 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:23.253792386 +0000 UTC m=+35.149994562 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.259523 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.259572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.259590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.259612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.259629 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.362295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.362331 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.362340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.362355 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.362366 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.464520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.464887 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.464899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.464917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.464930 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.566780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.566817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.566832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.566849 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.566863 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.668844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.668881 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.668890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.668906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.668917 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.770987 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.771029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.771041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.771056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.771070 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.795270 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" event={"ID":"a1d646c4-f044-4dcd-91d5-44034f746659","Type":"ContainerStarted","Data":"87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.811480 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.824134 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.838880 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.857585 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z 
is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.871594 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.873327 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.873373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.873383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.873399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.873409 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.938693 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.951124 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.963412 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.980233 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.981272 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.981397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.981412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.981437 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.981454 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:15Z","lastTransitionTime":"2025-12-09T14:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:15 crc kubenswrapper[4770]: I1209 14:23:15.999172 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kub
e-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:15Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.016485 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.056381 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.075654 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.084060 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.084099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.084110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.084123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.084134 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:16Z","lastTransitionTime":"2025-12-09T14:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.103809 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.116471 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.186358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.186397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.186415 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.186431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.186442 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:16Z","lastTransitionTime":"2025-12-09T14:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.289495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.289578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.289603 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.289634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.289658 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:16Z","lastTransitionTime":"2025-12-09T14:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.392326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.392382 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.392398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.392416 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.392430 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:16Z","lastTransitionTime":"2025-12-09T14:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.494424 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.494484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.494500 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.494517 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.494527 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:16Z","lastTransitionTime":"2025-12-09T14:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.588184 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.588252 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:16 crc kubenswrapper[4770]: E1209 14:23:16.588344 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.588384 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:16 crc kubenswrapper[4770]: E1209 14:23:16.588537 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:16 crc kubenswrapper[4770]: E1209 14:23:16.588657 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.597149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.597202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.597219 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.597234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.597245 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:16Z","lastTransitionTime":"2025-12-09T14:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.700146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.700199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.700210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.700228 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.700240 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:16Z","lastTransitionTime":"2025-12-09T14:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.802640 4770 generic.go:334] "Generic (PLEG): container finished" podID="a1d646c4-f044-4dcd-91d5-44034f746659" containerID="87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6" exitCode=0 Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.802722 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" event={"ID":"a1d646c4-f044-4dcd-91d5-44034f746659","Type":"ContainerDied","Data":"87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.803078 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.803134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.803158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.803186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.803209 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:16Z","lastTransitionTime":"2025-12-09T14:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.823871 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.847377 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.869529 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.881492 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.895510 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.906208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.906243 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.906254 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.906270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.906280 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:16Z","lastTransitionTime":"2025-12-09T14:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.909115 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.923993 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.934358 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.953229 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.968038 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.981181 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:16 crc kubenswrapper[4770]: I1209 14:23:16.993753 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:16Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.005654 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.008774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.008806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.008815 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.008829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.008837 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.022196 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.053603 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.111610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.111667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.111677 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.111691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.111700 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.214060 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.214105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.214121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.214141 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.214154 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.316180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.316225 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.316236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.316253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.316266 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.418294 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.418631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.418643 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.418664 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.418676 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.521342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.521395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.521405 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.521419 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.521427 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.623856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.623925 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.623940 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.623957 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.623969 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.726380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.726450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.726473 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.726499 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.726519 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.809778 4770 generic.go:334] "Generic (PLEG): container finished" podID="a1d646c4-f044-4dcd-91d5-44034f746659" containerID="4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763" exitCode=0 Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.809853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" event={"ID":"a1d646c4-f044-4dcd-91d5-44034f746659","Type":"ContainerDied","Data":"4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.815070 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.815425 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.824366 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.829093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.829151 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.829163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.829181 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.829521 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.837492 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.850126 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.860869 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0
bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.872649 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.883708 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.893782 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 
14:23:17.902785 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.913930 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.931905 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.931944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.931982 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.932001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 
14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.932012 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:17Z","lastTransitionTime":"2025-12-09T14:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.933195 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e491
17b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.952886 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.965161 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.985032 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:17 crc kubenswrapper[4770]: I1209 14:23:17.995712 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:17Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.006137 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.018557 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.032158 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.034188 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.034217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.034227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.034242 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 
14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.034255 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.046868 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.057280 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.087578 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.104506 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.116420 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.128378 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.136470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.136503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.136514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.136528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.136538 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.145614 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 
14:23:18.168572 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce66
9b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.186843 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.199380 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.211900 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.227351 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.237413 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.239016 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.239059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.239069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.239104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.239113 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.341796 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.341865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.341889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.341917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.341939 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.445114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.445152 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.445161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.445176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.445184 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.548811 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.548872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.548890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.548915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.548935 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.587611 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.587638 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.587764 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:18 crc kubenswrapper[4770]: E1209 14:23:18.587848 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:18 crc kubenswrapper[4770]: E1209 14:23:18.588060 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:18 crc kubenswrapper[4770]: E1209 14:23:18.588184 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.612817 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.626868 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.642693 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.651823 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.651878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.651893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.651913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.651928 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.656506 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.667211 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.680260 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.692780 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.704155 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.714144 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.748381 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.754085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.754119 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.754132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.754150 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.754161 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.766511 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.785851 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.786610 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.799497 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.817366 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.817758 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.821137 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.840150 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.844957 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.857212 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.857248 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.857259 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.857276 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.857290 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.859995 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799
488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.873636 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.889313 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.901346 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.916554 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.932325 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe
674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.945029 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"moun
tPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.958841 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.959873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.959897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.959907 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.959924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.959953 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:18Z","lastTransitionTime":"2025-12-09T14:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.969509 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:18 crc kubenswrapper[4770]: I1209 14:23:18.992906 4770 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cr
i-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:18Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.007679 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.021294 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.032032 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.048337 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://872
82a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.063444 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.063481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.063492 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.063509 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.063520 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.068721 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.165791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.165856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.165869 
4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.165904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.165916 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.268407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.268439 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.268447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.268461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.268469 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.370753 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.370783 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.370794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.370808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.370818 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.473155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.473211 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.473224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.473240 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.473252 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.575601 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.575634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.575642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.575655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.575665 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.677639 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.677676 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.677684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.677699 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.677709 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.779716 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.779796 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.779807 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.779826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.779837 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.824352 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.824351 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" event={"ID":"a1d646c4-f044-4dcd-91d5-44034f746659","Type":"ContainerStarted","Data":"95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.841249 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.857047 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa
457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.871264 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.882959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.883041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.883059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.883113 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.883134 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.886170 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.901460 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.917233 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.927632 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.940644 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.955563 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.979395 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.985287 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.985343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.985352 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.985390 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.985402 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:19Z","lastTransitionTime":"2025-12-09T14:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:19 crc kubenswrapper[4770]: I1209 14:23:19.994521 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:19Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.008550 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.020890 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.039805 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.063237 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.088334 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.088374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.088391 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.088411 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.088427 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:20Z","lastTransitionTime":"2025-12-09T14:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.191406 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.191490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.191501 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.191518 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.191531 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:20Z","lastTransitionTime":"2025-12-09T14:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.295637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.295772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.295797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.295830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.295853 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:20Z","lastTransitionTime":"2025-12-09T14:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.399469 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.399557 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.399575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.399631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.399648 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:20Z","lastTransitionTime":"2025-12-09T14:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.502507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.502568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.502585 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.502614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.502631 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:20Z","lastTransitionTime":"2025-12-09T14:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.587942 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.587940 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.587967 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:20 crc kubenswrapper[4770]: E1209 14:23:20.588292 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:20 crc kubenswrapper[4770]: E1209 14:23:20.588138 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:20 crc kubenswrapper[4770]: E1209 14:23:20.588460 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.605193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.605238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.605253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.605271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.605286 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:20Z","lastTransitionTime":"2025-12-09T14:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.708182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.708230 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.708257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.708273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.708286 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:20Z","lastTransitionTime":"2025-12-09T14:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.810287 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.810331 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.810346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.810361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.810373 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:20Z","lastTransitionTime":"2025-12-09T14:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.828667 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/0.log" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.831543 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615" exitCode=1 Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.831598 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.832717 4770 scope.go:117] "RemoveContainer" containerID="7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.853944 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.873625 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.888477 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.899159 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.912329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.912354 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.912363 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.912375 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.912383 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:20Z","lastTransitionTime":"2025-12-09T14:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.913129 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.931396 4770 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:20Z\\\",\\\"message\\\":\\\"or removal\\\\nI1209 14:23:20.188801 6039 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 14:23:20.189007 6039 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 14:23:20.189057 6039 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 14:23:20.189070 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:20.189083 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:20.189094 6039 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:20.189106 6039 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 14:23:20.189205 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 14:23:20.189272 6039 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:20.189283 6039 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:20.189317 6039 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:20.189334 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:20.189356 6039 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:20.189362 6039 factory.go:656] Stopping watch factory\\\\nI1209 14:23:20.189375 6039 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:20.189377 6039 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.944694 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.957636 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb51
3f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.969784 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.982501 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:20 crc kubenswrapper[4770]: I1209 14:23:20.994961 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:20Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.010357 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.014862 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.014899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.014907 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.014920 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.014929 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.023805 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.040169 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.055576 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.117799 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.117862 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.117880 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.117909 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.117938 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.219848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.219879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.219887 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.219899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.219907 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.322412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.322479 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.322498 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.322525 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.322543 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.424900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.424938 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.424948 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.424964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.424974 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.458511 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.528472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.528551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.528575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.528609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.528633 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.631846 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.631962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.631988 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.632020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.632042 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.734195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.734233 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.734244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.734260 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.734271 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.836340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.836415 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.836436 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.836465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.836486 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.886411 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.886494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.886522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.886551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.886575 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.905475 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2"] Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.906088 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:21 crc kubenswrapper[4770]: E1209 14:23:21.906279 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 
2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.911506 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.911544 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.913084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.913126 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.913138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.913153 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.913167 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: E1209 14:23:21.929250 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 
2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.934250 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.934318 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.934342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.934375 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.934403 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.934702 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: E1209 14:23:21.954684 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"1
39bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.957552 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f
5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.959552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.959993 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.960234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.960268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.960536 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.970395 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: E1209 14:23:21.978961 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"1
39bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.982245 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.982278 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.982289 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.982304 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.982316 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:21Z","lastTransitionTime":"2025-12-09T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:21 crc kubenswrapper[4770]: I1209 14:23:21.988072 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: E1209 14:23:22.000044 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:21Z is after 
2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: E1209 14:23:22.000158 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.001995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.002036 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.002073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.002094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.002106 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.005076 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.018717 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.028639 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8382884e-8094-4402-aa96-1f40d0b21c24-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.028689 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8382884e-8094-4402-aa96-1f40d0b21c24-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.028785 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8382884e-8094-4402-aa96-1f40d0b21c24-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.028830 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfdmb\" (UniqueName: \"kubernetes.io/projected/8382884e-8094-4402-aa96-1f40d0b21c24-kube-api-access-qfdmb\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.033443 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.050272 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.069971 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.085254 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.105036 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.105099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.105116 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.105140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.105158 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.118192 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.130139 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfdmb\" (UniqueName: \"kubernetes.io/projected/8382884e-8094-4402-aa96-1f40d0b21c24-kube-api-access-qfdmb\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.130323 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8382884e-8094-4402-aa96-1f40d0b21c24-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.130443 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8382884e-8094-4402-aa96-1f40d0b21c24-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.130653 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8382884e-8094-4402-aa96-1f40d0b21c24-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.131586 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8382884e-8094-4402-aa96-1f40d0b21c24-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.132068 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8382884e-8094-4402-aa96-1f40d0b21c24-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.136386 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.142706 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8382884e-8094-4402-aa96-1f40d0b21c24-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.148655 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfdmb\" (UniqueName: \"kubernetes.io/projected/8382884e-8094-4402-aa96-1f40d0b21c24-kube-api-access-qfdmb\") pod \"ovnkube-control-plane-749d76644c-m5jm2\" (UID: \"8382884e-8094-4402-aa96-1f40d0b21c24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.153485 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.170267 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.190177 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.209052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.209100 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc 
kubenswrapper[4770]: I1209 14:23:22.209118 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.209142 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.209163 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.221433 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34b
c013e07b94436988d5fc7615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:20Z\\\",\\\"message\\\":\\\"or removal\\\\nI1209 14:23:20.188801 6039 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 14:23:20.189007 6039 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 14:23:20.189057 6039 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 14:23:20.189070 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:20.189083 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:20.189094 6039 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:20.189106 6039 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 14:23:20.189205 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 14:23:20.189272 6039 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:20.189283 6039 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:20.189317 6039 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:20.189334 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:20.189356 6039 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:20.189362 6039 factory.go:656] Stopping watch factory\\\\nI1209 14:23:20.189375 6039 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:20.189377 6039 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:22Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.229501 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.312504 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.312531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.312539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.312553 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.312562 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.415648 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.415693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.415705 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.415746 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.415760 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.518516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.518560 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.518570 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.518585 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.518596 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.588022 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.588022 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:22 crc kubenswrapper[4770]: E1209 14:23:22.588174 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.588238 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:22 crc kubenswrapper[4770]: E1209 14:23:22.588350 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:22 crc kubenswrapper[4770]: E1209 14:23:22.588563 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.623201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.623279 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.623295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.623319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.623335 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.726809 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.726873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.726890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.726916 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.726934 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.830240 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.830292 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.830308 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.830333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.830350 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.841105 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" event={"ID":"8382884e-8094-4402-aa96-1f40d0b21c24","Type":"ContainerStarted","Data":"301de30f24605d408a7a9cba09046950c77873c33fc5765c4994677d3680f54c"} Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.935234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.935338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.935366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.935398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:22 crc kubenswrapper[4770]: I1209 14:23:22.935434 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:22Z","lastTransitionTime":"2025-12-09T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.039242 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.039454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.039576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.039651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.039717 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.142697 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.142787 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.142802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.142820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.142832 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.244112 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.244287 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.244376 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.244440 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:23:39.244408856 +0000 UTC m=+51.140611062 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.244508 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.244580 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:39.24455849 +0000 UTC m=+51.140760666 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.244845 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.244977 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:39.24495839 +0000 UTC m=+51.141160536 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.245765 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.245816 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.245832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.245856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.245871 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.345680 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.345747 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.345862 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.345878 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.345889 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.345947 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:39.345930902 +0000 UTC m=+51.242133038 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.346037 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.346091 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.346135 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.346244 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:39.346209468 +0000 UTC m=+51.242411644 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.348526 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.348612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.348635 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.348661 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.348681 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.452596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.452650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.452660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.452679 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.452691 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.555308 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.555343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.555352 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.555364 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.555373 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.658143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.658527 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.658548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.658572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.658589 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.761389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.761425 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.761435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.761449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.761458 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.776214 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-b7jh8"] Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.776602 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:23 crc kubenswrapper[4770]: E1209 14:23:23.776655 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.798799 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eb
a1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.816159 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.833237 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.845232 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" event={"ID":"8382884e-8094-4402-aa96-1f40d0b21c24","Type":"ContainerStarted","Data":"ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.845287 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" event={"ID":"8382884e-8094-4402-aa96-1f40d0b21c24","Type":"ContainerStarted","Data":"8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.846844 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/0.log" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.850019 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.850664 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.856116 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.863261 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.863319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.863330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.863349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.863362 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.867804 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.881269 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.898174 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:20Z\\\",\\\"message\\\":\\\"or removal\\\\nI1209 14:23:20.188801 6039 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 14:23:20.189007 6039 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 14:23:20.189057 6039 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 14:23:20.189070 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:20.189083 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:20.189094 6039 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:20.189106 6039 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 14:23:20.189205 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 14:23:20.189272 6039 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:20.189283 6039 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:20.189317 6039 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:20.189334 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:20.189356 6039 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:20.189362 6039 factory.go:656] Stopping watch factory\\\\nI1209 14:23:20.189375 6039 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:20.189377 6039 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.909467 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.921346 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.934902 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.947413 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.952147 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7j5\" (UniqueName: \"kubernetes.io/projected/98b4e85f-5bbb-40a6-a03a-c775e971ed85-kube-api-access-dc7j5\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.952218 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.961597 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.965331 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.965363 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.965373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.965389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.965785 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:23Z","lastTransitionTime":"2025-12-09T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:23 crc kubenswrapper[4770]: I1209 14:23:23.991346 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:23Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.002452 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.014933 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3
caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.026258 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 
2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.048240 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.057774 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc7j5\" (UniqueName: \"kubernetes.io/projected/98b4e85f-5bbb-40a6-a03a-c775e971ed85-kube-api-access-dc7j5\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.057874 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:24 crc kubenswrapper[4770]: E1209 14:23:24.058082 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:23:24 crc kubenswrapper[4770]: E1209 14:23:24.058162 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs podName:98b4e85f-5bbb-40a6-a03a-c775e971ed85 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:24.558137182 +0000 UTC m=+36.454339328 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs") pod "network-metrics-daemon-b7jh8" (UID: "98b4e85f-5bbb-40a6-a03a-c775e971ed85") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.064997 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.082413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.082622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.082715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.082814 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.082896 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:24Z","lastTransitionTime":"2025-12-09T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.085596 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc7j5\" (UniqueName: \"kubernetes.io/projected/98b4e85f-5bbb-40a6-a03a-c775e971ed85-kube-api-access-dc7j5\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.094677 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.110058 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.130258 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.150215 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.165684 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.176205 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.188661 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.193971 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.194020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.194032 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.194052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.194065 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:24Z","lastTransitionTime":"2025-12-09T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.198952 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.211039 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.236745 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:20Z\\\",\\\"message\\\":\\\"or removal\\\\nI1209 14:23:20.188801 6039 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 14:23:20.189007 6039 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 14:23:20.189057 6039 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 14:23:20.189070 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:20.189083 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:20.189094 6039 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:20.189106 6039 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 14:23:20.189205 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 14:23:20.189272 6039 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:20.189283 6039 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:20.189317 6039 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:20.189334 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:20.189356 6039 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:20.189362 6039 factory.go:656] Stopping watch factory\\\\nI1209 14:23:20.189375 6039 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:20.189377 6039 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.248976 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.265585 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.277521 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.288810 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.296638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.296672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.296681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.296693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.296702 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:24Z","lastTransitionTime":"2025-12-09T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.302961 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.314860 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:
21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:24Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.399010 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.399043 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.399051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.399064 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.399073 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:24Z","lastTransitionTime":"2025-12-09T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.512113 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.512168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.512186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.512212 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.512230 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:24Z","lastTransitionTime":"2025-12-09T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.560480 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:24 crc kubenswrapper[4770]: E1209 14:23:24.560603 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:23:24 crc kubenswrapper[4770]: E1209 14:23:24.560654 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs podName:98b4e85f-5bbb-40a6-a03a-c775e971ed85 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:25.560639923 +0000 UTC m=+37.456842059 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs") pod "network-metrics-daemon-b7jh8" (UID: "98b4e85f-5bbb-40a6-a03a-c775e971ed85") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.587869 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.587988 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.587898 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:24 crc kubenswrapper[4770]: E1209 14:23:24.588168 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:24 crc kubenswrapper[4770]: E1209 14:23:24.588248 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:24 crc kubenswrapper[4770]: E1209 14:23:24.588075 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.614051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.614083 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.614093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.614107 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.614119 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:24Z","lastTransitionTime":"2025-12-09T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.716113 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.716141 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.716151 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.716165 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.716173 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:24Z","lastTransitionTime":"2025-12-09T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.818653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.818692 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.818703 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.818719 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.818742 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:24Z","lastTransitionTime":"2025-12-09T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.923237 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.923707 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.923736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.923752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:24 crc kubenswrapper[4770]: I1209 14:23:24.923762 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:24Z","lastTransitionTime":"2025-12-09T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.026371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.026414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.026429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.026443 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.026455 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.128478 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.128526 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.128539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.128557 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.128568 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.230676 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.230713 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.230745 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.230762 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.230775 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.332933 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.332978 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.332990 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.333008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.333019 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.434944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.434993 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.435008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.435029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.435041 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.537299 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.537348 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.537358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.537376 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.537386 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.571883 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:25 crc kubenswrapper[4770]: E1209 14:23:25.572023 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:23:25 crc kubenswrapper[4770]: E1209 14:23:25.572078 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs podName:98b4e85f-5bbb-40a6-a03a-c775e971ed85 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:27.572062204 +0000 UTC m=+39.468264330 (durationBeforeRetry 2s). 
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.588171 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:23:25 crc kubenswrapper[4770]: E1209 14:23:25.588331 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85"
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.639879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.639917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.639930 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.639946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.639958 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.741766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.741804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.741813 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.741826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.741835 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
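Pods that still need a sandbox, like network-metrics-daemon-b7jh8 above, are skipped with "network is not ready" until a CNI config appears, and the entries just below show why none appears: the ovnkube-controller container keeps exiting (exitCode=1) and is held in CrashLoopBackOff. When triaging a capture like this, a small parser can tally which pods are blocked; the regex below is written against the exact shape of these journal lines and is only a sketch:

    import re
    from collections import Counter

    # Matches the 'pod_workers.go:...] "Error syncing pod, skipping" ... pod="ns/name"'
    # entries seen above.
    SYNC_ERR = re.compile(r'pod_workers\.go:\d+\] "Error syncing pod, skipping".*?pod="([^"]+)"')

    def blocked_pods(journal_text: str) -> Counter:
        """Count pods stuck behind NetworkPluginNotReady in a journal excerpt."""
        return Counter(SYNC_ERR.findall(journal_text))

    sample = ('Dec 09 14:23:25 crc kubenswrapper[4770]: E1209 14:23:25.588331 4770 '
              'pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready" '
              'pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-..."')
    print(blocked_pods(sample))
    # Counter({'openshift-multus/network-metrics-daemon-b7jh8': 1})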
Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.843973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.844022 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.844033 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.844047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.844058 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.858241 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/1.log" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.859214 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/0.log" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.861963 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f" exitCode=1 Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.862064 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.862249 4770 scope.go:117] "RemoveContainer" containerID="7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.863513 4770 scope.go:117] "RemoveContainer" containerID="7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f" Dec 09 14:23:25 crc kubenswrapper[4770]: E1209 14:23:25.863812 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.881203 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:25Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.895362 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:25Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.908970 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:25Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.920492 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:25Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.938455 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:25Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.946174 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.946221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.946244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.946284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.946298 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:25Z","lastTransitionTime":"2025-12-09T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.951121 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:25Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.962097 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:25Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.978098 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:25Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:25 crc kubenswrapper[4770]: I1209 14:23:25.990572 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:25Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.007428 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:26Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.033250 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:20Z\\\",\\\"message\\\":\\\"or removal\\\\nI1209 14:23:20.188801 6039 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 14:23:20.189007 6039 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 14:23:20.189057 6039 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 14:23:20.189070 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:20.189083 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:20.189094 6039 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:20.189106 6039 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 14:23:20.189205 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 14:23:20.189272 6039 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:20.189283 6039 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:20.189317 6039 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:20.189334 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:20.189356 6039 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:20.189362 6039 factory.go:656] Stopping watch factory\\\\nI1209 14:23:20.189375 6039 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:20.189377 6039 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:25Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI1209 14:23:24.869621 6213 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:24.869623 6213 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:24.869631 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:24.869646 6213 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:24.869651 6213 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:24.869656 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:24.869680 6213 factory.go:656] Stopping watch factory\\\\nI1209 14:23:24.869694 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:24.869700 6213 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:24.869707 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 14:23:24.869714 6213 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:24.869518 6213 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 14:23:24.869915 6213 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:24.869945 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 14:23:24.870023 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:26Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.046196 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:26Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.049081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.049129 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.049146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.049170 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.049186 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.059391 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:26Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.076771 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:26Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.091336 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:26Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.105832 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:26Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.120693 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:26Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.151923 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.151972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.151990 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.152015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.152030 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.255650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.256019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.256035 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.256058 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.256076 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.358369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.358433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.358449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.358472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.358490 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.461197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.461253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.461269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.461293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.461312 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.563854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.564151 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.564258 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.564347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.564438 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.587702 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.587714 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.587880 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:26 crc kubenswrapper[4770]: E1209 14:23:26.588436 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:26 crc kubenswrapper[4770]: E1209 14:23:26.588558 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:26 crc kubenswrapper[4770]: E1209 14:23:26.588879 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.666839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.667218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.667412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.667641 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.667898 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.771636 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.771695 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.771706 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.771745 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.771758 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.871173 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/1.log"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.874606 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.874852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.874952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.875052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.875119 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.977708 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.977775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.977786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.977804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:26 crc kubenswrapper[4770]: I1209 14:23:26.977816 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:26Z","lastTransitionTime":"2025-12-09T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.081205 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.081272 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.081295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.081325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.081348 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:27Z","lastTransitionTime":"2025-12-09T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.184986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.185068 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.185084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.185108 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.185120 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:27Z","lastTransitionTime":"2025-12-09T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.291551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.291606 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.291616 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.291631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.291646 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:27Z","lastTransitionTime":"2025-12-09T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.394883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.395232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.395442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.395604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.395763 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:27Z","lastTransitionTime":"2025-12-09T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.498351 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.498421 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.498446 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.498473 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.498488 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:27Z","lastTransitionTime":"2025-12-09T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.588206 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:23:27 crc kubenswrapper[4770]: E1209 14:23:27.588426 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.594668 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:23:27 crc kubenswrapper[4770]: E1209 14:23:27.594842 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 14:23:27 crc kubenswrapper[4770]: E1209 14:23:27.594911 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs podName:98b4e85f-5bbb-40a6-a03a-c775e971ed85 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:31.59489362 +0000 UTC m=+43.491095756 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs") pod "network-metrics-daemon-b7jh8" (UID: "98b4e85f-5bbb-40a6-a03a-c775e971ed85") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.601471 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.601554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.601569 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.601615 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.601665 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:27Z","lastTransitionTime":"2025-12-09T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.704376 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.704442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.704457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.704481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.704495 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:27Z","lastTransitionTime":"2025-12-09T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.807824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.807895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.807908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.807930 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.807943 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:27Z","lastTransitionTime":"2025-12-09T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.911000 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.911069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.911088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.911115 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:27 crc kubenswrapper[4770]: I1209 14:23:27.911127 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:27Z","lastTransitionTime":"2025-12-09T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.014175 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.014714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.014812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.014883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.014945 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.118158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.118209 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.118219 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.118237 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.118250 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.220873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.220914 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.220928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.220947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.220960 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.323779 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.323835 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.323845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.323860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.323868 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.427383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.427451 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.427470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.427493 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.427510 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.530906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.530979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.530991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.531012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.531028 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.587338 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.587464 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:28 crc kubenswrapper[4770]: E1209 14:23:28.587683 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.587882 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:28 crc kubenswrapper[4770]: E1209 14:23:28.587879 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:28 crc kubenswrapper[4770]: E1209 14:23:28.588023 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.605632 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.618885 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.634052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.634111 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.634125 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.634146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.634158 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.636832 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b240
9e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.656957 4770 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:20Z\\\",\\\"message\\\":\\\"or removal\\\\nI1209 14:23:20.188801 6039 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 14:23:20.189007 6039 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 14:23:20.189057 6039 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 14:23:20.189070 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:20.189083 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:20.189094 6039 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:20.189106 6039 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 14:23:20.189205 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 14:23:20.189272 6039 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:20.189283 6039 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:20.189317 6039 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:20.189334 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:20.189356 6039 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:20.189362 6039 factory.go:656] Stopping watch factory\\\\nI1209 14:23:20.189375 6039 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:20.189377 6039 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:25Z\\\",\\\"message\\\":\\\"re:[where column _uuid == 
{fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:24.869621 6213 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:24.869623 6213 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:24.869631 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:24.869646 6213 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:24.869651 6213 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:24.869656 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:24.869680 6213 factory.go:656] Stopping watch factory\\\\nI1209 14:23:24.869694 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:24.869700 6213 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:24.869707 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 14:23:24.869714 6213 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:24.869518 6213 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 14:23:24.869915 6213 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:24.869945 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 14:23:24.870023 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c
207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.673477 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.686915 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.701645 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.720816 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.736373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.736442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.736454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.736477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.736505 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.737499 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.758366 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.774083 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.789547 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.809791 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.830058 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.840059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.840119 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.840130 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.840153 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.840166 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.857122 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.871199 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.885800 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:28Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.944014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.944089 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.944110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.944141 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:28 crc kubenswrapper[4770]: I1209 14:23:28.944162 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:28Z","lastTransitionTime":"2025-12-09T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.046806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.046847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.046857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.046872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.046885 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.149278 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.149323 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.149337 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.149352 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.149363 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.252040 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.252349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.252437 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.252523 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.252607 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.356474 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.356531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.356544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.356562 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.356576 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.459045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.459322 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.459409 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.459497 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.459581 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.563066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.563151 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.563182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.563213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.563235 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.588215 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:29 crc kubenswrapper[4770]: E1209 14:23:29.588702 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.665663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.665720 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.665763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.665786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.665802 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.768598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.768656 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.768668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.768690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.768704 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.871407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.871454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.871470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.871487 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.871498 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.973462 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.973508 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.973522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.973536 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:29 crc kubenswrapper[4770]: I1209 14:23:29.973545 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:29Z","lastTransitionTime":"2025-12-09T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.076311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.076361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.076374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.076396 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.076408 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:30Z","lastTransitionTime":"2025-12-09T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.179609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.179669 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.179681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.179700 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.179713 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:30Z","lastTransitionTime":"2025-12-09T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.282509 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.282547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.282561 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.282577 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.282589 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:30Z","lastTransitionTime":"2025-12-09T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.384795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.384841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.384855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.384872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.384884 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:30Z","lastTransitionTime":"2025-12-09T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.487717 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.487829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.487853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.487882 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.487904 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:30Z","lastTransitionTime":"2025-12-09T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.588188 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.588225 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:30 crc kubenswrapper[4770]: E1209 14:23:30.588362 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.588402 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:30 crc kubenswrapper[4770]: E1209 14:23:30.589091 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:30 crc kubenswrapper[4770]: E1209 14:23:30.589543 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.590377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.590437 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.590458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.590488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.590510 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:30Z","lastTransitionTime":"2025-12-09T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.693662 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.693775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.693802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.693832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.693854 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:30Z","lastTransitionTime":"2025-12-09T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.797259 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.797318 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.797338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.797366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.797387 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:30Z","lastTransitionTime":"2025-12-09T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.900611 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.900661 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.900672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.900689 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:30 crc kubenswrapper[4770]: I1209 14:23:30.900702 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:30Z","lastTransitionTime":"2025-12-09T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.003195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.003238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.003251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.003274 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.003286 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.105920 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.106019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.106036 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.106064 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.106083 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.209078 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.209130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.209147 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.209164 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.209176 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.311960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.311992 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.312001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.312013 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.312021 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.415772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.415837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.415855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.415881 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.415898 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.519274 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.519344 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.519366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.519399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.519422 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.587962 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:31 crc kubenswrapper[4770]: E1209 14:23:31.588173 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.622596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.622631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.622643 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.622658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.622669 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.635702 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:31 crc kubenswrapper[4770]: E1209 14:23:31.635841 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:23:31 crc kubenswrapper[4770]: E1209 14:23:31.635913 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs podName:98b4e85f-5bbb-40a6-a03a-c775e971ed85 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:39.635897607 +0000 UTC m=+51.532099743 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs") pod "network-metrics-daemon-b7jh8" (UID: "98b4e85f-5bbb-40a6-a03a-c775e971ed85") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.725774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.725848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.725871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.725900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.725923 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.828262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.828328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.828345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.828371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.828388 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.931806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.931868 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.931878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.931899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:31 crc kubenswrapper[4770]: I1209 14:23:31.931914 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:31Z","lastTransitionTime":"2025-12-09T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.034931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.034964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.034972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.034984 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.034993 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.137711 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.137773 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.137786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.137803 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.137814 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.214947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.214994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.215005 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.215024 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.215038 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: E1209 14:23:32.234398 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:32Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.239809 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.239865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.239883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.239906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.239925 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: E1209 14:23:32.257010 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:32Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.262652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.262710 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.262767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.262798 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.262820 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: E1209 14:23:32.278517 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:32Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.283899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.283938 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.283948 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.283964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.283976 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: E1209 14:23:32.297075 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:32Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.301791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.301834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.301847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.301864 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.301878 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: E1209 14:23:32.315527 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:32Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:32 crc kubenswrapper[4770]: E1209 14:23:32.315688 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.317440 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
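
The E1209 entries at 14:23:32.278517, .297075 and .315527, plus the one whose tail opens this stretch, are a single bounded retry loop: the kubelet re-sends the identical status patch, hits the identical webhook error, and after the final attempt logs "Unable to update node status ... update node status exceeds retry count". (In the upstream kubelet this bound is an internal constant, nodeStatusUpdateRetry, five attempts in the releases this log appears to match.) The payload it keeps re-sending is a strategic merge patch: the $setElementOrder/conditions directive pins the order of the conditions list, and the entries themselves are merged by their "type" key, which is why each condition carries type plus only the fields being updated. A sketch that pulls the conditions out of such a payload with nothing but the standard library; the input is a trimmed, hand-copied fragment of the patch above, not a literal quote of it:

    // patchpeek.go, a sketch: decode the "conditions" from a node-status
    // strategic merge patch like the ones logged above. Field names mirror
    // the log.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    type condition struct {
        Type    string `json:"type"`
        Status  string `json:"status"`
        Reason  string `json:"reason"`
        Message string `json:"message"`
    }

    func main() {
        patch := []byte(`{"status":{"conditions":[
            {"type":"MemoryPressure","status":"False","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},
            {"type":"Ready","status":"False","reason":"KubeletNotReady","message":"container runtime network not ready"}]}}`)

        var doc struct {
            Status struct {
                Conditions []condition `json:"conditions"`
            } `json:"status"`
        }
        if err := json.Unmarshal(patch, &doc); err != nil {
            log.Fatalf("unmarshal: %v", err)
        }
        for _, c := range doc.Status.Conditions {
            fmt.Printf("%-15s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
    }

Note that the patch reports MemoryPressure, DiskPressure and PIDPressure as False (healthy) while Ready is False for the CNI reason alone; the node is otherwise fine.
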
event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.317487 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.317503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.317524 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.317539 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.420180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.420214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.420221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.420234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.420243 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.523034 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.523095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.523109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.523130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.523144 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.588130 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.588157 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:32 crc kubenswrapper[4770]: E1209 14:23:32.588406 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.588161 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:32 crc kubenswrapper[4770]: E1209 14:23:32.588634 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:32 crc kubenswrapper[4770]: E1209 14:23:32.588710 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.625884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.625927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.625938 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.625954 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.625968 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.729029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.729119 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.729132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.729151 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.729166 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.833192 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.833266 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.833283 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.833322 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.833337 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.937664 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.937749 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.937762 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.937779 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:32 crc kubenswrapper[4770]: I1209 14:23:32.937790 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:32Z","lastTransitionTime":"2025-12-09T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.039951 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.040018 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.040035 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.040056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.040071 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.143232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.143272 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.143283 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.143299 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.143313 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.246429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.246495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.246553 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.246579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.246679 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.349146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.349185 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.349194 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.349207 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.349216 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.452091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.452130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.452139 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.452152 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.452161 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.554330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.554379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.554388 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.554402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.554412 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.587673 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:33 crc kubenswrapper[4770]: E1209 14:23:33.587882 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.658505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.658618 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.658638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.658662 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.658683 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.761988 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.762031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.762040 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.762059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.762069 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.864838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.864895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.864906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.864924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.864936 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.967011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.967056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.967067 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.967084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:33 crc kubenswrapper[4770]: I1209 14:23:33.967097 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:33Z","lastTransitionTime":"2025-12-09T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.069924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.069991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.070014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.070045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.070065 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.172710 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.172801 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.172814 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.172834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.172846 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.275079 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.275122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.275134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.275146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.275156 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.377403 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.377439 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.377450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.377465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.377476 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.479425 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.479481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.479491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.479506 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.479517 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.582002 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.582054 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.582288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.582321 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.582338 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.587595 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:34 crc kubenswrapper[4770]: E1209 14:23:34.587720 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.588241 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.588252 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:34 crc kubenswrapper[4770]: E1209 14:23:34.588334 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:34 crc kubenswrapper[4770]: E1209 14:23:34.588488 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.685324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.685372 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.685384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.685402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.685417 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.788836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.788902 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.788924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.788952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.788977 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.891825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.891875 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.891887 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.891907 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.891917 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.995042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.995085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.995097 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.995115 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:34 crc kubenswrapper[4770]: I1209 14:23:34.995128 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:34Z","lastTransitionTime":"2025-12-09T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.097774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.097814 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.097825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.097840 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.097851 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:35Z","lastTransitionTime":"2025-12-09T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.202422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.202510 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.202536 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.202570 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.202605 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:35Z","lastTransitionTime":"2025-12-09T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.305180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.305213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.305221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.305235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.305244 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:35Z","lastTransitionTime":"2025-12-09T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.407888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.407942 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.407958 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.407983 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.408003 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:35Z","lastTransitionTime":"2025-12-09T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.510593 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.510675 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.510702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.510771 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.510822 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:35Z","lastTransitionTime":"2025-12-09T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.587266 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:35 crc kubenswrapper[4770]: E1209 14:23:35.587441 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.613937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.614003 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.614015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.614030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.614040 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:35Z","lastTransitionTime":"2025-12-09T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.716974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.717004 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.717011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.717024 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.717032 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:35Z","lastTransitionTime":"2025-12-09T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.819751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.820089 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.820187 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.820268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.820338 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:35Z","lastTransitionTime":"2025-12-09T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.923669 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.923928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.923993 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.924095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:35 crc kubenswrapper[4770]: I1209 14:23:35.924174 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:35Z","lastTransitionTime":"2025-12-09T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.026528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.027011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.027116 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.027213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.027305 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.130948 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.130993 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.131004 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.131022 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.131034 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.234415 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.235149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.235183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.235213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.235232 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.338477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.338533 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.338549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.338571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.338588 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.441345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.441430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.441487 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.441517 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.441538 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.544535 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.544594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.544608 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.544626 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.544640 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.587390 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:36 crc kubenswrapper[4770]: E1209 14:23:36.587872 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.587561 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.587490 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:36 crc kubenswrapper[4770]: E1209 14:23:36.588356 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:36 crc kubenswrapper[4770]: E1209 14:23:36.588513 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.647135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.647195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.647205 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.647228 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.647244 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.749622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.749678 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.749693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.749709 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.749721 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.853263 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.853783 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.853878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.853964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.854062 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.957218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.957291 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.957312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.957338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:36 crc kubenswrapper[4770]: I1209 14:23:36.957352 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:36Z","lastTransitionTime":"2025-12-09T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.060945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.061008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.061019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.061042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.061055 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.164470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.164515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.164526 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.164545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.164554 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.267607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.267642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.267650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.267662 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.267672 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.369694 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.369728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.369738 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.369774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.369785 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.477414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.477773 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.478008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.478039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.478058 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.581227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.581275 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.581285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.581304 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.581315 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.587815 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:37 crc kubenswrapper[4770]: E1209 14:23:37.587998 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.684482 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.684546 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.684568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.684596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.684619 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.787151 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.787194 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.787203 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.787217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.787228 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.889839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.889911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.889924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.889943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.889956 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.991846 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.991884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.991893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.991908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:37 crc kubenswrapper[4770]: I1209 14:23:37.991920 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:37Z","lastTransitionTime":"2025-12-09T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.094640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.094676 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.094687 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.094704 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.094714 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:38Z","lastTransitionTime":"2025-12-09T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.198212 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.198257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.198294 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.198312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.198323 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:38Z","lastTransitionTime":"2025-12-09T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.300873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.300943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.300954 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.300970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.300980 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:38Z","lastTransitionTime":"2025-12-09T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.404367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.404412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.404422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.404437 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.404449 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:38Z","lastTransitionTime":"2025-12-09T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.507453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.507528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.507550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.507577 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.507601 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:38Z","lastTransitionTime":"2025-12-09T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.587291 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.587390 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.587398 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:38 crc kubenswrapper[4770]: E1209 14:23:38.587515 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:38 crc kubenswrapper[4770]: E1209 14:23:38.587892 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:38 crc kubenswrapper[4770]: E1209 14:23:38.588371 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
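
The run of entries above is the kubelet re-recording the same four node events roughly every 100ms and re-setting Ready=False with the same root cause each time: the container runtime reports NetworkPluginNotReady because /etc/kubernetes/cni/net.d/ contains no CNI configuration, so the three pods listed next cannot get a sandbox or be synced. Below is a minimal Go sketch of that directory check, assuming the path from the log message and the .conf/.conflist/.json extensions that libcni scans for; it is an illustration of the condition, not the kubelet's actual code path.

// cnicheck.go: a minimal sketch of the check behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/". Path and extensions are assumptions
// taken from the log message and libcni's documented conf-file scan.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// This is the state the kubelet keeps reporting above.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI config files:", found)
}

Run on the node, this should keep reporting the empty-directory state until the network provider (here, OVN-Kubernetes) writes its configuration, at which point the NodeNotReady loop above would normally stop.
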
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.588912 4770 scope.go:117] "RemoveContainer" containerID="7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.611241 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.611295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.611334 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.611361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.611376 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:38Z","lastTransitionTime":"2025-12-09T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.620880 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3
bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.635423 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.648121 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.664857 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.677900 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.695148 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.714915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.714959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:38 crc 
kubenswrapper[4770]: I1209 14:23:38.714970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.714989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.715001 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:38Z","lastTransitionTime":"2025-12-09T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.721772 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f
1faa988a5b7928e81ccfc64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7345835c48788a90c17015caff58e4533da7f34bc013e07b94436988d5fc7615\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:20Z\\\",\\\"message\\\":\\\"or removal\\\\nI1209 14:23:20.188801 6039 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 14:23:20.189007 6039 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 14:23:20.189057 6039 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 14:23:20.189070 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:20.189083 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:20.189094 6039 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:20.189106 6039 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 14:23:20.189205 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 14:23:20.189272 6039 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:20.189283 6039 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:20.189317 6039 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:20.189334 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:20.189356 6039 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:20.189362 6039 factory.go:656] Stopping watch factory\\\\nI1209 14:23:20.189375 6039 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:20.189377 6039 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:25Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:24.869621 6213 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:24.869623 6213 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:24.869631 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:24.869646 6213 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:24.869651 6213 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:24.869656 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:24.869680 6213 factory.go:656] Stopping watch factory\\\\nI1209 14:23:24.869694 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:24.869700 6213 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:24.869707 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 14:23:24.869714 6213 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:24.869518 6213 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI1209 14:23:24.869915 6213 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:24.869945 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 14:23:24.870023 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.737105 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 
14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.755020 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.769524 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.784215 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.800992 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.814682 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.817858 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.817906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.817919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.817938 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.817951 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:38Z","lastTransitionTime":"2025-12-09T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.828611 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.846247 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.867116 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.884826 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.901223 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.914696 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/1.log" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.916924 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.917640 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.918277 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.919834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.919876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.919887 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.919901 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.919910 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:38Z","lastTransitionTime":"2025-12-09T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.929322 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.942928 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.955396 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.970661 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.984186 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:38 crc kubenswrapper[4770]: I1209 14:23:38.999189 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:38Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.010539 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.022687 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.022877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.022901 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.022914 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.022923 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.032969 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.061124 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.084037 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.097037 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.118369 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f
1faa988a5b7928e81ccfc64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:25Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:24.869621 6213 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:24.869623 6213 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:24.869631 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:24.869646 6213 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:24.869651 6213 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:24.869656 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:24.869680 6213 factory.go:656] Stopping watch factory\\\\nI1209 14:23:24.869694 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:24.869700 6213 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:24.869707 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 14:23:24.869714 6213 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:24.869518 6213 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 14:23:24.869915 6213 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:24.869945 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 14:23:24.870023 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.124887 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.124926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.124936 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.124950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.124961 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.131040 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.144380 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.162426 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.178325 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.194467 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.206426 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.221400 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.227446 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.227490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.227503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.227520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 
14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.227532 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.234557 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.256522 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.270182 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.284983 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.303972 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086
cd3073d772e182fd5ecfb6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:25Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:24.869621 6213 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:24.869623 6213 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:24.869631 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:24.869646 6213 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:24.869651 6213 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:24.869656 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:24.869680 6213 factory.go:656] Stopping watch factory\\\\nI1209 14:23:24.869694 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:24.869700 6213 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:24.869707 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 14:23:24.869714 6213 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:24.869518 6213 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 14:23:24.869915 6213 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:24.869945 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 14:23:24.870023 6213 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.317047 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.317179 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.317199 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:24:11.317180228 +0000 UTC m=+83.213382364 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.317242 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.317250 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.317284 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:24:11.31727645 +0000 UTC m=+83.213478586 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.317344 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.317379 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:24:11.317369382 +0000 UTC m=+83.213571528 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.320217 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.329317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.329352 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.329362 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.329377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.329388 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.331772 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.347073 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.363487 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.378270 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.392086 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.405248 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b827
99488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.418587 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.418659 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.418845 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.418886 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: 
E1209 14:23:39.418843 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.418916 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.418925 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.418943 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.418999 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 14:24:11.41897516 +0000 UTC m=+83.315177296 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.419021 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 14:24:11.419014161 +0000 UTC m=+83.315216297 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.421507 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.431702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.431764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.431773 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.431787 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.431797 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.534447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.534539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.534554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.534587 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.534605 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.588355 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.588596 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.637581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.637714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.638122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.638160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.638175 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.722806 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.723102 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.723271 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs podName:98b4e85f-5bbb-40a6-a03a-c775e971ed85 nodeName:}" failed. No retries permitted until 2025-12-09 14:23:55.723235819 +0000 UTC m=+67.619438155 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs") pod "network-metrics-daemon-b7jh8" (UID: "98b4e85f-5bbb-40a6-a03a-c775e971ed85") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.742246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.742313 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.742341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.742380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.742398 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.845551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.845605 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.845619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.845638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.845650 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.922809 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/2.log" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.923652 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/1.log" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.926516 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd" exitCode=1 Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.926556 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.926602 4770 scope.go:117] "RemoveContainer" containerID="7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.927341 4770 scope.go:117] "RemoveContainer" containerID="cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd" Dec 09 14:23:39 crc kubenswrapper[4770]: E1209 14:23:39.927523 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.944988 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.948556 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.948585 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:39 crc 
kubenswrapper[4770]: I1209 14:23:39.948594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.948609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.948619 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:39Z","lastTransitionTime":"2025-12-09T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.972445 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086
cd3073d772e182fd5ecfb6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7102878df9cb95ab061e5beb2852f9716e3cf32f1faa988a5b7928e81ccfc64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:25Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:24.869621 6213 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 14:23:24.869623 6213 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 14:23:24.869631 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 14:23:24.869646 6213 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 14:23:24.869651 6213 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 14:23:24.869656 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 14:23:24.869680 6213 factory.go:656] Stopping watch factory\\\\nI1209 14:23:24.869694 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 14:23:24.869700 6213 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 14:23:24.869707 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 14:23:24.869714 6213 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 14:23:24.869518 6213 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 14:23:24.869915 6213 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:24.869945 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 14:23:24.870023 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for 
network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.985634 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:39 crc kubenswrapper[4770]: I1209 14:23:39.998463 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:39Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.014260 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.028189 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.043547 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.051350 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.051614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.051717 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.051829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.051904 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.059088 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.075899 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.088968 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.102223 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.116172 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.126849 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.140555 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.154361 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.154779 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.154892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.154995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.155091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.155177 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.175116 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.188036 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.257250 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.257596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.257715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.257896 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.258056 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.360439 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.360482 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.360494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.360510 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.360522 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.462772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.462812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.462823 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.462839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.462847 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.566051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.566136 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.566166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.566199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.566218 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.587950 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.588001 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:40 crc kubenswrapper[4770]: E1209 14:23:40.588147 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.588195 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:40 crc kubenswrapper[4770]: E1209 14:23:40.588309 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:40 crc kubenswrapper[4770]: E1209 14:23:40.588419 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.668708 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.668927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.668948 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.668970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.668987 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.771411 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.771454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.771470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.771491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.771506 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.874035 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.874103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.874120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.874143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.874162 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.932287 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/2.log" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.935913 4770 scope.go:117] "RemoveContainer" containerID="cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd" Dec 09 14:23:40 crc kubenswrapper[4770]: E1209 14:23:40.936096 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.951347 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.977043 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.977100 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.977114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.977130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.977148 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:40Z","lastTransitionTime":"2025-12-09T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:40 crc kubenswrapper[4770]: I1209 14:23:40.985921 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.000963 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:40Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.019437 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.040603 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086
cd3073d772e182fd5ecfb6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.053098 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.064172 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.075640 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.079531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.079561 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.079570 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.079583 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.079594 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:41Z","lastTransitionTime":"2025-12-09T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.088916 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.098239 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.108631 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.121319 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.135390 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.148417 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.160528 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 
14:23:41.174285 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.182330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.182369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.182379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.182395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.182406 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:41Z","lastTransitionTime":"2025-12-09T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.188586 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:41Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.284775 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.284813 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.284828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.284844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.284854 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:41Z","lastTransitionTime":"2025-12-09T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.388173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.388251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.388273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.388302 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.388324 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:41Z","lastTransitionTime":"2025-12-09T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.491836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.491915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.491952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.491986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.492020 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:41Z","lastTransitionTime":"2025-12-09T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.587486 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:41 crc kubenswrapper[4770]: E1209 14:23:41.587677 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.594429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.594515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.594528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.594550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.594565 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:41Z","lastTransitionTime":"2025-12-09T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.697799 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.697865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.697875 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.697896 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.697907 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:41Z","lastTransitionTime":"2025-12-09T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.801633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.801688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.801696 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.801712 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.801721 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:41Z","lastTransitionTime":"2025-12-09T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.904028 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.904249 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.904340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.904402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:41 crc kubenswrapper[4770]: I1209 14:23:41.904477 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:41Z","lastTransitionTime":"2025-12-09T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.006452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.006789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.006889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.007114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.007211 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.110373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.110452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.110467 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.110486 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.110499 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.213496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.213808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.213932 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.214047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.214158 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.261046 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.272960 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.273974 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.286187 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.301525 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.316713 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.316772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc 
kubenswrapper[4770]: I1209 14:23:42.316782 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.316801 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.316812 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.325360 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086
cd3073d772e182fd5ecfb6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.340783 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.352854 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.365108 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.387353 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.407124 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.420468 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.420550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.420592 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.420626 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.420651 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.426457 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.447098 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb5
20a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.463245 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.476761 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.489042 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.509236 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.523494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.523586 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.523611 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.523661 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.523683 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.525546 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.536512 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.550176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.550215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.550223 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.550236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.550244 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: E1209 14:23:42.562449 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.566173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.566218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.566231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.566247 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.566259 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: E1209 14:23:42.584994 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.587977 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.588006 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:42 crc kubenswrapper[4770]: E1209 14:23:42.588094 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:42 crc kubenswrapper[4770]: E1209 14:23:42.588231 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.588261 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:42 crc kubenswrapper[4770]: E1209 14:23:42.588303 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.589123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.589154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.589167 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.589182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.589194 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
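
Every "Error syncing pod" entry above reduces to the same filesystem fact: the runtime finds no CNI network configuration under /etc/kubernetes/cni/net.d/, reports NetworkPluginNotReady, and the kubelet skips any pod that needs pod networking. A minimal sketch of that discovery check in Go, assuming the standard extensions (.conf, .conflist, .json) that CNI config loaders such as libcni scan for; only the directory path is taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // hasCNIConfig reports whether dir contains at least one file that a CNI
    // config loader would treat as a network configuration. It mirrors the
    // check behind the "no CNI configuration file" message in the log above.
    func hasCNIConfig(dir string) (bool, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		if e.IsDir() {
    			continue
    		}
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // extensions commonly scanned by libcni
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
    	if err != nil {
    		fmt.Println("read error:", err)
    		return
    	}
    	fmt.Println("CNI config present:", ok)
    }

On this node the check stays false until the network operator writes its config, which is why the same error repeats for every pod in this section.
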
Dec 09 14:23:42 crc kubenswrapper[4770]: E1209 14:23:42.600896 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z"
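
The patch payload never matters here: the request dies during TLS verification of the node.network-node-identity.openshift.io webhook's serving certificate, because the current time (2025-12-09T14:23:42Z) falls after the certificate's NotAfter bound (2025-08-24T17:21:41Z). The message is Go's standard crypto/x509 validity-window error. A minimal sketch of the same comparison; the certificate file path is hypothetical, since the log does not name it:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkValidityWindow reproduces the clock comparison behind
    // "x509: certificate has expired or is not yet valid".
    func checkValidityWindow(pemPath string, now time.Time) error {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	if now.Before(cert.NotBefore) {
    		return fmt.Errorf("certificate is not yet valid: current time %s is before %s",
    			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
    	}
    	if now.After(cert.NotAfter) {
    		return fmt.Errorf("certificate has expired: current time %s is after %s",
    			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    	}
    	return nil
    }

    func main() {
    	// Hypothetical path; substitute the webhook's actual serving certificate.
    	if err := checkValidityWindow("/path/to/serving-cert.pem", time.Now()); err != nil {
    		fmt.Println(err)
    	}
    }

Until that certificate is rotated (or the host clock corrected), every status patch below fails identically, whatever its contents.
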
event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.604536 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.604552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.604563 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: E1209 14:23:42.616136 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.620213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.620253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.620265 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.620281 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.620293 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: E1209 14:23:42.633437 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:42Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:42 crc kubenswrapper[4770]: E1209 14:23:42.633557 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.635084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
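
The "will retry" entries above and the "exceeds retry count" entry that follows them are two halves of one bounded loop: the kubelet attempts the status update a fixed number of times per sync, then gives up once. A minimal sketch of that pattern, assuming a retry budget of 5 to match the upstream nodeStatusUpdateRetry constant (the log itself only shows that the count is finite):

    package main

    import (
    	"errors"
    	"fmt"
    )

    // nodeStatusUpdateRetry is an assumed bound matching the kubelet's
    // upstream constant; the log only proves the loop is finite.
    const nodeStatusUpdateRetry = 5

    // updateNodeStatus retries tryUpdate a fixed number of times, producing
    // the same "will retry" / "exceeds retry count" pair seen in the log.
    func updateNodeStatus(tryUpdate func() error) error {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		if err := tryUpdate(); err != nil {
    			fmt.Printf("Error updating node status, will retry: %v\n", err)
    			continue
    		}
    		return nil
    	}
    	return errors.New("update node status exceeds retry count")
    }

    func main() {
    	webhookErr := errors.New("failed calling webhook: x509: certificate has expired or is not yet valid")
    	if err := updateNodeStatus(func() error { return webhookErr }); err != nil {
    		fmt.Println("Unable to update node status:", err)
    	}
    }

Because the failure is deterministic (an expired certificate rather than a transient network blip), retrying cannot help; the loop burns its budget and the whole cycle restarts on the next sync, as the entries below show.
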
event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.635121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.635132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.635149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.635158 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.737091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.737154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.737170 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.737188 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.737202 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.840062 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.840138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.840158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.840183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.840201 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.942455 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.942493 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.942516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.942534 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:42 crc kubenswrapper[4770]: I1209 14:23:42.942546 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:42Z","lastTransitionTime":"2025-12-09T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.044594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.044667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.044694 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.044763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.044790 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:43Z","lastTransitionTime":"2025-12-09T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.147547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.147596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.147605 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.147623 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.147633 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:43Z","lastTransitionTime":"2025-12-09T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.250825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.250881 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.250894 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.250911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.250923 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:43Z","lastTransitionTime":"2025-12-09T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.354509 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.354547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.354555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.354570 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.354579 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:43Z","lastTransitionTime":"2025-12-09T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.457427 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.457546 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.457572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.457603 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.457624 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:43Z","lastTransitionTime":"2025-12-09T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.561132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.561193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.561208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.561226 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.561238 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:43Z","lastTransitionTime":"2025-12-09T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.588229 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:43 crc kubenswrapper[4770]: E1209 14:23:43.588383 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.663804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.663841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.663853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.663869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:43 crc kubenswrapper[4770]: I1209 14:23:43.663878 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:43Z","lastTransitionTime":"2025-12-09T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
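
These cycles repeat roughly every 100 ms with only the timestamps changing. The condition={...} payload on each setters.go line is plain JSON, so long stretches like this can be digested mechanically rather than read. A minimal sketch of pulling one apart, with struct fields named after the keys visible in the log lines themselves:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // readyCondition mirrors the condition={...} JSON printed by setters.go.
    type readyCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	// Verbatim payload from one of the surrounding log lines.
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:43Z","lastTransitionTime":"2025-12-09T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
    	var c readyCondition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }
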
Dec 09 14:23:44 crc kubenswrapper[4770]: I1209 14:23:44.110146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:44 crc kubenswrapper[4770]: I1209 14:23:44.110193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:44 crc kubenswrapper[4770]: I1209 14:23:44.110202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:44 crc kubenswrapper[4770]: I1209 14:23:44.110217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:44 crc kubenswrapper[4770]: I1209 14:23:44.110226 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:44Z","lastTransitionTime":"2025-12-09T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:44 crc kubenswrapper[4770]: I1209 14:23:44.587631 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:44 crc kubenswrapper[4770]: I1209 14:23:44.587683 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:44 crc kubenswrapper[4770]: I1209 14:23:44.587645 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:44 crc kubenswrapper[4770]: E1209 14:23:44.587815 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:44 crc kubenswrapper[4770]: E1209 14:23:44.587878 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:44 crc kubenswrapper[4770]: E1209 14:23:44.587956 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:45 crc kubenswrapper[4770]: I1209 14:23:45.037094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:45 crc kubenswrapper[4770]: I1209 14:23:45.037137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:45 crc kubenswrapper[4770]: I1209 14:23:45.037148 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:45 crc kubenswrapper[4770]: I1209 14:23:45.037163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:45 crc kubenswrapper[4770]: I1209 14:23:45.037174 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:45Z","lastTransitionTime":"2025-12-09T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:45 crc kubenswrapper[4770]: I1209 14:23:45.588165 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:23:45 crc kubenswrapper[4770]: E1209 14:23:45.588314 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85"
Dec 09 14:23:46 crc kubenswrapper[4770]: I1209 14:23:46.065490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:46 crc kubenswrapper[4770]: I1209 14:23:46.065535 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:46 crc kubenswrapper[4770]: I1209 14:23:46.065544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:46 crc kubenswrapper[4770]: I1209 14:23:46.065556 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:46 crc kubenswrapper[4770]: I1209 14:23:46.065564 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:46Z","lastTransitionTime":"2025-12-09T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
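The NodeNotReady blocks repeat at what looks like a fixed ~100 ms cadence. To confirm that from the klog headers rather than eyeballing timestamps, a sketch along these lines parses each "Node became not ready" entry and prints the gap to the previous one (again assuming the journal text is on stdin):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Print the interval between consecutive "Node became not ready" entries by
// parsing the klog header (e.g. "I1209 14:23:46.065564"). The year is absent
// from the header and irrelevant for the deltas.
func main() {
	re := regexp.MustCompile(`I(\d{4} \d{2}:\d{2}:\d{2}\.\d{6}) .*Node became not ready`)
	const layout = "0102 15:04:05.000000"

	var prev time.Time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		t, err := time.Parse(layout, m[1])
		if err != nil {
			continue
		}
		if !prev.IsZero() {
			fmt.Printf("%s  +%s\n", m[1], t.Sub(prev))
		}
		prev = t
	}
}
```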
Dec 09 14:23:46 crc kubenswrapper[4770]: I1209 14:23:46.587809 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:46 crc kubenswrapper[4770]: I1209 14:23:46.587912 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:46 crc kubenswrapper[4770]: E1209 14:23:46.587966 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:46 crc kubenswrapper[4770]: E1209 14:23:46.588022 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:46 crc kubenswrapper[4770]: I1209 14:23:46.588097 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:46 crc kubenswrapper[4770]: E1209 14:23:46.588249 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:47 crc kubenswrapper[4770]: I1209 14:23:47.099287 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:47 crc kubenswrapper[4770]: I1209 14:23:47.099325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:47 crc kubenswrapper[4770]: I1209 14:23:47.099336 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:47 crc kubenswrapper[4770]: I1209 14:23:47.099348 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:47 crc kubenswrapper[4770]: I1209 14:23:47.099358 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:47Z","lastTransitionTime":"2025-12-09T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:47 crc kubenswrapper[4770]: I1209 14:23:47.587970 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:23:47 crc kubenswrapper[4770]: E1209 14:23:47.588157 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85"
Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.027249 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.027305 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.027320 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.027338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.027351 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:48Z","lastTransitionTime":"2025-12-09T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.437174 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.437238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.437262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.437293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.437314 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:48Z","lastTransitionTime":"2025-12-09T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.539285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.539327 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.539340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.539359 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.539370 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:48Z","lastTransitionTime":"2025-12-09T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.587977 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.588039 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.588056 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:48 crc kubenswrapper[4770]: E1209 14:23:48.588128 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:48 crc kubenswrapper[4770]: E1209 14:23:48.588274 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:48 crc kubenswrapper[4770]: E1209 14:23:48.588548 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.608923 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.625479 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.642145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.642197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.642214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.642239 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.642258 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:48Z","lastTransitionTime":"2025-12-09T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.655140 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.666538 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.683769 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.704943 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.716494 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.732047 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.744766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.744809 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.744826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.744850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.744869 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:48Z","lastTransitionTime":"2025-12-09T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.745184 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.760808 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.774307 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.790869 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.808157 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.826307 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e
027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.840904 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38c408-15a5-484c-b16f-27a123fdc5ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8d48bca138e010498dc1d434a1efa683e67f2176336efa0ce20ca32f4e27c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb50c0d2ebf1a5fb054060797889366776d97a07ede4ab2c1502e8769d6d8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdb3d7b981af632cfe211bf77cec39078183e1bf00340512775e3187ba7a260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.847092 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.847137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.847146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.847166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.847180 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:48Z","lastTransitionTime":"2025-12-09T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.856628 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.873131 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.886858 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:48Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.950388 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.950426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.950434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.950449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:48 crc kubenswrapper[4770]: I1209 14:23:48.950458 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:48Z","lastTransitionTime":"2025-12-09T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.053631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.053715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.053754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.053775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.053814 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.156610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.156673 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.156689 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.156711 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.156750 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.259218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.259260 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.259268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.259282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.259293 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.361641 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.361689 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.361701 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.361719 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.361759 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.464515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.464580 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.464596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.464622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.464639 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.567049 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.567096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.567107 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.567122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.567131 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.587972 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:49 crc kubenswrapper[4770]: E1209 14:23:49.588892 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.669853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.669895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.669906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.669921 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.669930 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.772248 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.772314 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.772332 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.772356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.772373 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.876079 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.876134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.876145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.876165 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.876179 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.978300 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.978344 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.978353 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.978369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:49 crc kubenswrapper[4770]: I1209 14:23:49.978380 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:49Z","lastTransitionTime":"2025-12-09T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.081350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.081393 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.081402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.081417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.081426 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:50Z","lastTransitionTime":"2025-12-09T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.184748 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.184783 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.184791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.184805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.184814 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:50Z","lastTransitionTime":"2025-12-09T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.287260 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.287332 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.287342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.287374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.287385 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:50Z","lastTransitionTime":"2025-12-09T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.389919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.389984 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.390001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.390027 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.390045 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:50Z","lastTransitionTime":"2025-12-09T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.493828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.494255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.494460 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.494685 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.494944 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:50Z","lastTransitionTime":"2025-12-09T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.588107 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:50 crc kubenswrapper[4770]: E1209 14:23:50.588473 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.588188 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:50 crc kubenswrapper[4770]: E1209 14:23:50.588902 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.588108 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:50 crc kubenswrapper[4770]: E1209 14:23:50.589752 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.597686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.597712 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.597721 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.597748 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.597760 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:50Z","lastTransitionTime":"2025-12-09T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.700924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.701104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.701137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.701217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.701307 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:50Z","lastTransitionTime":"2025-12-09T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.804946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.805415 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.805574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.805724 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.805916 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:50Z","lastTransitionTime":"2025-12-09T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.910189 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.910237 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.910249 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.910265 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:50 crc kubenswrapper[4770]: I1209 14:23:50.910278 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:50Z","lastTransitionTime":"2025-12-09T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.012962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.013044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.013058 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.013083 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.013101 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.117178 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.117235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.117246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.117268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.117280 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.221518 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.221619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.221641 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.221663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.221675 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.326041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.326093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.326103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.326126 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.326141 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.428897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.428949 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.428965 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.428988 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.429003 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.533005 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.533590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.533699 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.533832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.533920 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.587576 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:23:51 crc kubenswrapper[4770]: E1209 14:23:51.587831 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.637547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.637604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.637614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.637637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.637650 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.741046 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.741096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.741109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.741134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.741153 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.843828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.843888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.843901 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.843925 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.843938 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.946711 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.946803 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.946818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.946853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:51 crc kubenswrapper[4770]: I1209 14:23:51.946869 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:51Z","lastTransitionTime":"2025-12-09T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.050614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.050652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.050660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.050681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.050691 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.154192 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.154251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.154266 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.154288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.154303 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.257358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.257399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.257408 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.257426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.257438 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.360439 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.360513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.360525 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.360545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.360556 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.463979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.464083 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.464099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.464130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.464149 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.567391 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.567450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.567462 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.567484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.567501 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.587834 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.587938 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:52 crc kubenswrapper[4770]: E1209 14:23:52.588033 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.588119 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:52 crc kubenswrapper[4770]: E1209 14:23:52.588304 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:52 crc kubenswrapper[4770]: E1209 14:23:52.588428 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.658039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.658090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.658104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.658126 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.658140 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: E1209 14:23:52.671626 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:52Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.676619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.676685 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.676706 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.676756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.676774 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: E1209 14:23:52.692685 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:52Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.697306 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.697363 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.697375 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.697394 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.697407 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:23:52 crc kubenswrapper[4770]: E1209 14:23:52.714012 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:52Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.719507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.719536 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.719544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.719577 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.719590 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:52 crc kubenswrapper[4770]: E1209 14:23:52.735269 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:52Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.739626 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.739686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.739713 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.739776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.739795 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:52 crc kubenswrapper[4770]: E1209 14:23:52.753623 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:52Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:52 crc kubenswrapper[4770]: E1209 14:23:52.753844 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.756203 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.756259 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.756272 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.756292 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.756305 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.859763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.859822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.859836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.859858 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.859871 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.963025 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.963074 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.963085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.963101 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:52 crc kubenswrapper[4770]: I1209 14:23:52.963113 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:52Z","lastTransitionTime":"2025-12-09T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.066375 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.066425 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.066438 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.066464 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.066476 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.168567 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.168619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.168631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.168653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.168668 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.271673 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.271740 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.271752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.271775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.271789 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.374781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.374866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.374886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.374913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.374952 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.477757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.477820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.477832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.477855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.477867 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.581008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.581087 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.581100 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.581119 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.581131 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.587504 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:53 crc kubenswrapper[4770]: E1209 14:23:53.587712 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.683815 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.683874 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.683886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.683905 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.683917 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.787578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.787665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.787690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.787758 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.787791 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.890081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.890136 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.890146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.890160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.890169 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.992618 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.992654 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.992663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.992678 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:53 crc kubenswrapper[4770]: I1209 14:23:53.992688 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:53Z","lastTransitionTime":"2025-12-09T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.095586 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.095649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.095674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.095702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.095718 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:54Z","lastTransitionTime":"2025-12-09T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.198620 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.198961 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.198977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.199013 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.199023 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:54Z","lastTransitionTime":"2025-12-09T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the five-entry node-status cycle above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats at roughly 100 ms intervals from 14:23:54.301 through 14:23:56.879, identical apart from advancing timestamps; those repeated cycles are elided here. The entries that differ, and the final cycle of the run, follow in order. ...]
Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.587534 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.587669 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:54 crc kubenswrapper[4770]: E1209 14:23:54.587789 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:54 crc kubenswrapper[4770]: I1209 14:23:54.587839 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:54 crc kubenswrapper[4770]: E1209 14:23:54.587924 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:54 crc kubenswrapper[4770]: E1209 14:23:54.588071 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:55 crc kubenswrapper[4770]: I1209 14:23:55.587703 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:23:55 crc kubenswrapper[4770]: E1209 14:23:55.588099 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85"
Dec 09 14:23:55 crc kubenswrapper[4770]: I1209 14:23:55.589098 4770 scope.go:117] "RemoveContainer" containerID="cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd"
Dec 09 14:23:55 crc kubenswrapper[4770]: E1209 14:23:55.589328 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45"
Dec 09 14:23:55 crc kubenswrapper[4770]: I1209 14:23:55.797105 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:23:55 crc kubenswrapper[4770]: E1209 14:23:55.797388 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 14:23:55 crc kubenswrapper[4770]: E1209 14:23:55.797526 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs podName:98b4e85f-5bbb-40a6-a03a-c775e971ed85 nodeName:}" failed. No retries permitted until 2025-12-09 14:24:27.797491661 +0000 UTC m=+99.693693797 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs") pod "network-metrics-daemon-b7jh8" (UID: "98b4e85f-5bbb-40a6-a03a-c775e971ed85") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.588172 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.588248 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.588198 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:23:56 crc kubenswrapper[4770]: E1209 14:23:56.588416 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:23:56 crc kubenswrapper[4770]: E1209 14:23:56.588473 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:23:56 crc kubenswrapper[4770]: E1209 14:23:56.588543 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.982085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.982130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.982141 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.982161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.982175 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:56Z","lastTransitionTime":"2025-12-09T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.986434 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h5dw2_c38553c5-6cc9-435b-8c52-3262b861d1cf/kube-multus/0.log" Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.986480 4770 generic.go:334] "Generic (PLEG): container finished" podID="c38553c5-6cc9-435b-8c52-3262b861d1cf" containerID="08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed" exitCode=1 Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.986515 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h5dw2" event={"ID":"c38553c5-6cc9-435b-8c52-3262b861d1cf","Type":"ContainerDied","Data":"08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed"} Dec 09 14:23:56 crc kubenswrapper[4770]: I1209 14:23:56.987076 4770 scope.go:117] "RemoveContainer" containerID="08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.011182 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.030450 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.043524 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.059954 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.075968 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38c408-15a5-484c-b16f-27a123fdc5ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8d48bca138e010498dc1d434a1efa683e67f2176336efa0ce20ca32f4e27c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb50c0d2ebf1a5fb054060797889366776d97a07ede4ab2c1502e8769d6d8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdb3d7b981af632cfe211bf77cec39078183e1bf00340512775e3187ba7a260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.085575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.085622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.085632 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.085645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.085656 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:57Z","lastTransitionTime":"2025-12-09T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.101380 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.118275 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.129949 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.149143 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086
cd3073d772e182fd5ecfb6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.162555 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.175110 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.187613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.187683 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.187693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.187705 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.187714 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:57Z","lastTransitionTime":"2025-12-09T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.191549 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeove
rride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.205794 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"2025-12-09T14:23:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590\\\\n2025-12-09T14:23:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590 to /host/opt/cni/bin/\\\\n2025-12-09T14:23:11Z [verbose] multus-daemon started\\\\n2025-12-09T14:23:11Z [verbose] Readiness Indicator file check\\\\n2025-12-09T14:23:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.218846 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.230995 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.245031 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.260634 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.275391 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:57Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.289966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.289994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.290023 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.290037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.290047 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:57Z","lastTransitionTime":"2025-12-09T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.392072 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.392100 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.392108 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.392120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.392129 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:57Z","lastTransitionTime":"2025-12-09T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.494399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.494462 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.494472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.494488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.494499 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:57Z","lastTransitionTime":"2025-12-09T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.587406 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:57 crc kubenswrapper[4770]: E1209 14:23:57.587554 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.597595 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.597637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.597649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.597667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.597679 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:57Z","lastTransitionTime":"2025-12-09T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.699789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.699880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.699899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.699920 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.699936 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:57Z","lastTransitionTime":"2025-12-09T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.802869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.802926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.802937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.802952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.802962 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:57Z","lastTransitionTime":"2025-12-09T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.904840 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.904880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.904892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.904908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:57 crc kubenswrapper[4770]: I1209 14:23:57.904918 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:57Z","lastTransitionTime":"2025-12-09T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.007307 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.007350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.007361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.007377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.007388 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.109488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.109548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.109567 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.109590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.109607 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.212075 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.212118 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.212130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.212146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.212157 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.315056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.315108 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.315128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.315151 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.315169 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.418255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.418296 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.418308 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.418325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.418339 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.522285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.522347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.522357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.522378 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.522392 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.588308 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.588364 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.588511 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:23:58 crc kubenswrapper[4770]: E1209 14:23:58.588675 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:23:58 crc kubenswrapper[4770]: E1209 14:23:58.588790 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:23:58 crc kubenswrapper[4770]: E1209 14:23:58.588939 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.611474 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc899071
5f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.626890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.626918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.626927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.626939 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.626948 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.631458 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.644207 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.655845 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.665491 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.679330 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.715590 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.729370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.729424 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.729435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.729458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.729473 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.761973 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.785436 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:
21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.799505 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92ed
af5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.812784 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.828268 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.832497 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.832525 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.832535 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.832549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.832560 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.844556 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"2025-12-09T14:23:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590\\\\n2025-12-09T14:23:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590 to /host/opt/cni/bin/\\\\n2025-12-09T14:23:11Z [verbose] multus-daemon started\\\\n2025-12-09T14:23:11Z [verbose] Readiness Indicator file check\\\\n2025-12-09T14:23:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.864036 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.876634 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.892804 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.907253 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38c408-15a5-484c-b16f-27a123fdc5ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8d48bca138e010498dc1d434a1efa683e67f2176336efa0ce20ca32f4e27c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb50c0d2ebf1a5fb054060797889366776d97a07ede4ab2c1502e8769d6d8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdb3d7b981af632cfe211bf77cec39078183e1bf00340512775e3187ba7a260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.923175 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:58Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.934915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.934972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.934984 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.935003 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.935015 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:58Z","lastTransitionTime":"2025-12-09T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.997202 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h5dw2_c38553c5-6cc9-435b-8c52-3262b861d1cf/kube-multus/0.log" Dec 09 14:23:58 crc kubenswrapper[4770]: I1209 14:23:58.997290 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h5dw2" event={"ID":"c38553c5-6cc9-435b-8c52-3262b861d1cf","Type":"ContainerStarted","Data":"a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.013303 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.028394 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.037884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.037907 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.037915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.037930 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.037940 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.042572 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.058964 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"2025-12-09T14:23:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590\\\\n2025-12-09T14:23:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590 to /host/opt/cni/bin/\\\\n2025-12-09T14:23:11Z [verbose] multus-daemon started\\\\n2025-12-09T14:23:11Z [verbose] Readiness Indicator file check\\\\n2025-12-09T14:23:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.072512 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.085972 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.099388 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.110970 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38c408-15a5-484c-b16f-27a123fdc5ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8d48bca138e010498dc1d434a1efa683e67f2176336efa0ce20ca32f4e27c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb50c0d2ebf1a5fb054060797889366776d97a07ede4ab2c1502e8769d6d8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdb3d7b981af632cfe211bf77cec39078183e1bf00340512775e3187ba7a260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.123949 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.140899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.140963 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.140976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.141002 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.141024 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.144665 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.156737 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.178332 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.190777 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.203088 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.215125 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.225920 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.240994 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.242797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.242839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:59 crc 
kubenswrapper[4770]: I1209 14:23:59.242886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.242904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.242916 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.259656 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086
cd3073d772e182fd5ecfb6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:23:59Z is after 2025-08-24T17:21:41Z"
Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.344991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.345043 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.345052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.345068 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.345077 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.447412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.447454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.447465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.447481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.447494 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.549505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.549537 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.549548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.549563 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.549573 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.588197 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:23:59 crc kubenswrapper[4770]: E1209 14:23:59.588322 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.652515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.652574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.652589 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.652619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.652635 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.756186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.756221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.756231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.756245 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.756254 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.858517 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.858562 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.858572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.858587 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.858599 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.961029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.961079 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.961088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.961100 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:23:59 crc kubenswrapper[4770]: I1209 14:23:59.961109 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:23:59Z","lastTransitionTime":"2025-12-09T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.063872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.063910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.063918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.063935 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.063945 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.165854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.166146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.166288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.166681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.166829 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.268857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.268892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.268903 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.268918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.268929 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.372257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.372347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.372363 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.372387 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.372402 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.475307 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.475378 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.475396 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.475422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.475447 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.577023 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.577145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.577156 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.577170 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.577182 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.587217 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.587308 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.587354 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:00 crc kubenswrapper[4770]: E1209 14:24:00.587432 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:00 crc kubenswrapper[4770]: E1209 14:24:00.587373 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:00 crc kubenswrapper[4770]: E1209 14:24:00.587555 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.680105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.680175 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.680198 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.680227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.680249 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.782801 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.782845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.782856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.782871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.782880 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.884854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.885420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.885500 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.885570 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.885659 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.988666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.988786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.988802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.988818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:00 crc kubenswrapper[4770]: I1209 14:24:00.988829 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:00Z","lastTransitionTime":"2025-12-09T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.091416 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.091454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.091461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.091474 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.091483 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:01Z","lastTransitionTime":"2025-12-09T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.194244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.194291 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.194303 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.194319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.194333 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:01Z","lastTransitionTime":"2025-12-09T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.297141 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.297190 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.297204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.297225 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.297242 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:01Z","lastTransitionTime":"2025-12-09T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.399939 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.399975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.399986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.400001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.400011 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:01Z","lastTransitionTime":"2025-12-09T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.502874 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.502923 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.502939 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.502962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.502978 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:01Z","lastTransitionTime":"2025-12-09T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.587600 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:01 crc kubenswrapper[4770]: E1209 14:24:01.587834 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.606011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.606087 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.606113 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.606144 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.606170 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:01Z","lastTransitionTime":"2025-12-09T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.709283 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.709348 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.709365 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.709386 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.709403 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:01Z","lastTransitionTime":"2025-12-09T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.812269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.812312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.812355 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.812374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.812388 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:01Z","lastTransitionTime":"2025-12-09T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.949270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.949316 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.949326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.949339 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:01 crc kubenswrapper[4770]: I1209 14:24:01.949348 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:01Z","lastTransitionTime":"2025-12-09T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.052784 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.052850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.052866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.052891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.052912 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.154697 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.154768 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.154785 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.154803 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.154814 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.257090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.257118 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.257127 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.257140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.257150 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.359603 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.359642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.359649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.359663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.359673 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.462718 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.462831 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.462854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.462884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.462906 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.565748 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.565831 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.565842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.565857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.565872 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.587520 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.587550 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.587526 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:02 crc kubenswrapper[4770]: E1209 14:24:02.587644 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:02 crc kubenswrapper[4770]: E1209 14:24:02.587704 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:02 crc kubenswrapper[4770]: E1209 14:24:02.587959 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.668873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.668941 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.668965 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.668987 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.669003 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.771775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.771828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.771843 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.771873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.771888 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.801306 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.801340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.801350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.801366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.801378 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: E1209 14:24:02.818263 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:02Z is after 
2025-08-24T17:21:41Z" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.823017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.823050 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.823061 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.823076 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.823086 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: E1209 14:24:02.842823 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:02Z is after 
2025-08-24T17:21:41Z" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.856878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.856924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.856932 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.856946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.856955 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: E1209 14:24:02.871997 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:02Z is after 
2025-08-24T17:21:41Z" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.875364 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.875413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.875425 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.875445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.875457 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: E1209 14:24:02.893605 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:02Z is after 
2025-08-24T17:21:41Z" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.898368 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.898407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.898417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.898433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.898445 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:02 crc kubenswrapper[4770]: E1209 14:24:02.914166 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:02Z is after 
2025-08-24T17:21:41Z" Dec 09 14:24:02 crc kubenswrapper[4770]: E1209 14:24:02.914347 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.915987 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.916008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.916016 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.916030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:02 crc kubenswrapper[4770]: I1209 14:24:02.916040 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:02Z","lastTransitionTime":"2025-12-09T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.018088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.018133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.018144 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.018168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.018181 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.122153 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.122232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.122249 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.122274 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.122291 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.225575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.225644 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.225665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.225690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.225709 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.329972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.330026 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.330044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.330066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.330083 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
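Each failed PATCH above dies in TLS verification: the webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the current time 2025-12-09T14:24:02Z reported by the node. A minimal Go sketch of the same validity-window check that produced the "certificate has expired or is not yet valid" error, assuming a hypothetical path for the webhook's serving certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; point this at wherever the
	// network-node-identity webhook keeps its serving certificate.
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	now := time.Now()
	// The same NotBefore/NotAfter window test the TLS stack applies.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid until", cert.NotBefore)
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}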
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.433012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.433050 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.433063 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.433079 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.433090 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.535454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.535505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.535520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.535539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.535554 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.587766 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:24:03 crc kubenswrapper[4770]: E1209 14:24:03.587909 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85"
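Every NotReady heartbeat and pod sync failure above traces back to one root cause: no CNI network configuration has been written to /etc/kubernetes/cni/net.d/. A minimal Go sketch of the directory check, assuming the extensions that standard CNI config loaders accept (.conf, .conflist, .json):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfDir is the directory named in the errors above; the kubelet keeps
// reporting NetworkReady=false until the network plugin writes a config here.
const cniConfDir = "/etc/kubernetes/cni/net.d"

func main() {
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		// Extensions accepted by common CNI config loaders (an assumption
		// of this sketch, not taken from the log itself).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", cniConfDir,
			"- matches the NetworkPluginNotReady condition above")
	}
}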
pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.637653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.637686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.637694 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.637708 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.637717 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.740863 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.740917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.740929 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.740947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.740961 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.843244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.843285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.843296 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.843312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.843324 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.946134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.946219 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.946231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.946254 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:03 crc kubenswrapper[4770]: I1209 14:24:03.946282 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:03Z","lastTransitionTime":"2025-12-09T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.049169 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.049228 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.049241 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.049265 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.049277 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.152000 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.152063 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.152073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.152094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.152106 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.254839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.254883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.254894 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.254910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.254921 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.357150 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.357202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.357214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.357230 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.357242 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.459950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.460002 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.460015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.460030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.460042 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.564791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.564954 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.564994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.565027 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.565068 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.588094 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.588160 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.588227 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:04 crc kubenswrapper[4770]: E1209 14:24:04.588347 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:04 crc kubenswrapper[4770]: E1209 14:24:04.588534 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:04 crc kubenswrapper[4770]: E1209 14:24:04.588700 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.667604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.667638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.667651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.667666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.667677 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.770071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.770108 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.770117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.770132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.770143 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.873481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.873554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.873575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.873607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.873629 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.976870 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.976944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.976981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.977013 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:04 crc kubenswrapper[4770]: I1209 14:24:04.977037 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:04Z","lastTransitionTime":"2025-12-09T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.079200 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.079269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.079288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.079312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.079330 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:05Z","lastTransitionTime":"2025-12-09T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.181369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.181398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.181407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.181421 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.181430 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:05Z","lastTransitionTime":"2025-12-09T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.284507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.284583 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.284609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.284639 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.284662 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:05Z","lastTransitionTime":"2025-12-09T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.387335 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.387403 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.387428 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.387496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.387523 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:05Z","lastTransitionTime":"2025-12-09T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.490927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.491000 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.491021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.491048 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.491068 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:05Z","lastTransitionTime":"2025-12-09T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.587373 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:05 crc kubenswrapper[4770]: E1209 14:24:05.587645 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.593642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.593704 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.593770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.593800 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.593823 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:05Z","lastTransitionTime":"2025-12-09T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.696197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.696252 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.696269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.696293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.696310 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:05Z","lastTransitionTime":"2025-12-09T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.799508 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.799607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.799632 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.799655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.799666 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:05Z","lastTransitionTime":"2025-12-09T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.903275 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.903317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.903328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.903350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:05 crc kubenswrapper[4770]: I1209 14:24:05.903362 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:05Z","lastTransitionTime":"2025-12-09T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.006935 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.007004 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.007029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.007058 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.007081 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.110300 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.110335 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.110345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.110360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.110372 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.213161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.213216 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.213231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.213253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.213265 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.315217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.315264 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.315279 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.315301 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.315315 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.417812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.417859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.417869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.417886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.417896 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.520053 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.520113 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.520135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.520196 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.520209 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.587834 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.587934 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:06 crc kubenswrapper[4770]: E1209 14:24:06.588004 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:06 crc kubenswrapper[4770]: E1209 14:24:06.588080 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.588160 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:06 crc kubenswrapper[4770]: E1209 14:24:06.588305 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.621967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.622020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.622038 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.622068 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.622087 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.724420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.724471 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.724487 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.724506 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.724522 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.827048 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.827193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.827216 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.827247 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.827268 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.933244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.933320 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.933343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.933371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:06 crc kubenswrapper[4770]: I1209 14:24:06.933389 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:06Z","lastTransitionTime":"2025-12-09T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.036227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.036301 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.036323 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.036344 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.036361 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.140039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.140094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.140108 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.140132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.140149 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.243021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.243073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.243085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.243104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.243120 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.346255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.346316 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.346338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.346364 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.346386 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.448459 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.448518 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.448532 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.448549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.448560 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.550816 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.550855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.550866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.550882 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.550894 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.587195 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:07 crc kubenswrapper[4770]: E1209 14:24:07.587344 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.653365 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.653420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.653431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.653448 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.653460 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.756195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.756277 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.756302 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.756333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.756355 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.859178 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.859258 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.859286 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.859317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.859343 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.961716 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.961818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.961841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.961871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:07 crc kubenswrapper[4770]: I1209 14:24:07.961890 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:07Z","lastTransitionTime":"2025-12-09T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.064630 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.064698 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.064715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.064816 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.064837 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.168187 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.168236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.168246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.168267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.168279 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.271010 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.271060 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.271070 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.271088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.271099 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.374158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.374231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.374243 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.374270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.374287 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.476463 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.476524 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.476536 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.476566 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.476579 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.578518 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.578559 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.578570 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.578583 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.578592 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.587935 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.588059 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.588111 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:08 crc kubenswrapper[4770]: E1209 14:24:08.588183 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:08 crc kubenswrapper[4770]: E1209 14:24:08.588048 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:08 crc kubenswrapper[4770]: E1209 14:24:08.588298 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.616932 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.634588 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.646209 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.658443 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.667816 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.680362 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.680496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.680617 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.680733 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.680834 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.689333 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b240
9e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.713574 4770 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.729421 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.747420 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 
14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.764193 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.777832 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.783842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.783891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.783935 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.783961 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.784002 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.789122 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.806461 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"2025-12-09T14:23:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590\\\\n2025-12-09T14:23:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590 to /host/opt/cni/bin/\\\\n2025-12-09T14:23:11Z [verbose] multus-daemon started\\\\n2025-12-09T14:23:11Z [verbose] Readiness Indicator file check\\\\n2025-12-09T14:23:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.824573 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.840012 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.859003 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.873571 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38c408-15a5-484c-b16f-27a123fdc5ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8d48bca138e010498dc1d434a1efa683e67f2176336efa0ce20ca32f4e27c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb50c0d2ebf1a5fb054060797889366776d97a07ede4ab2c1502e8769d6d8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdb3d7b981af632cfe211bf77cec39078183e1bf00340512775e3187ba7a260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.886393 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.886457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.886478 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.886503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.886521 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.890846 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:08Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.998923 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 
14:24:08.998969 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.999014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.999030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:08 crc kubenswrapper[4770]: I1209 14:24:08.999042 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:08Z","lastTransitionTime":"2025-12-09T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.110137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.110199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.110213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.110237 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.110256 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:09Z","lastTransitionTime":"2025-12-09T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.213638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.213702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.213716 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.213757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.213770 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:09Z","lastTransitionTime":"2025-12-09T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.317874 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.317929 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.317941 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.317963 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.317975 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:09Z","lastTransitionTime":"2025-12-09T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.422050 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.422110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.422124 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.422149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.422167 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:09Z","lastTransitionTime":"2025-12-09T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.524766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.524817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.524827 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.524848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.524858 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:09Z","lastTransitionTime":"2025-12-09T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.588224 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:09 crc kubenswrapper[4770]: E1209 14:24:09.588716 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.589022 4770 scope.go:117] "RemoveContainer" containerID="cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.627169 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.627222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.627234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.627256 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.627270 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:09Z","lastTransitionTime":"2025-12-09T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.730275 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.730326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.730338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.730356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.730371 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:09Z","lastTransitionTime":"2025-12-09T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.833496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.833535 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.833545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.833594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.833608 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:09Z","lastTransitionTime":"2025-12-09T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.936244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.936284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.936293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.936308 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:09 crc kubenswrapper[4770]: I1209 14:24:09.936319 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:09Z","lastTransitionTime":"2025-12-09T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.038092 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.038132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.038140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.038154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.038165 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.140677 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.140772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.140785 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.140804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.140816 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.242775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.242825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.242836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.242856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.242868 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.345159 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.345218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.345226 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.345241 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.345251 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.448135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.448206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.448227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.448254 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.448276 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.551549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.551657 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.551678 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.551700 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.551713 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.588062 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.588132 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.588315 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:10 crc kubenswrapper[4770]: E1209 14:24:10.588300 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:10 crc kubenswrapper[4770]: E1209 14:24:10.588480 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:10 crc kubenswrapper[4770]: E1209 14:24:10.588697 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.654492 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.654556 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.654571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.654597 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.654616 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.758000 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.758317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.758326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.758340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.758349 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.860691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.860756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.860767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.860782 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.860791 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.962843 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.962890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.962898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.962913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:10 crc kubenswrapper[4770]: I1209 14:24:10.962922 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:10Z","lastTransitionTime":"2025-12-09T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.035931 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/2.log" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.054345 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3"} Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.054774 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.070640 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.073006 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.073068 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.073084 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.073129 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.073147 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:11Z","lastTransitionTime":"2025-12-09T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.095308 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.110029 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.154390 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.187335 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.189348 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.189374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.189385 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.189400 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.189413 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:11Z","lastTransitionTime":"2025-12-09T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.201044 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.214701 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.241503 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.259067 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.294394 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.294439 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.294450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.294468 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.294479 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:11Z","lastTransitionTime":"2025-12-09T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.323307 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"2025-12-09T14:23:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590\\\\n2025-12-09T14:23:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590 to /host/opt/cni/bin/\\\\n2025-12-09T14:23:11Z [verbose] multus-daemon started\\\\n2025-12-09T14:23:11Z [verbose] Readiness Indicator file check\\\\n2025-12-09T14:23:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.347431 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.354983 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.355117 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.355162 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.355277 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: 
object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.355325 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.355310634 +0000 UTC m=+147.251512770 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.355491 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.355483589 +0000 UTC m=+147.251685725 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.355524 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.355547 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.35554002 +0000 UTC m=+147.251742156 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.365058 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.379600 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.397526 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.397565 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.397576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.397591 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 
14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.397605 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:11Z","lastTransitionTime":"2025-12-09T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.401663 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38c408-15a5-484c-b16f-27a123fdc5ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8d48bca138e010498dc1d434a1efa683e67f2176336efa0ce20ca32f4e27c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb50c0d2ebf1a5fb054060797889366776d97a07ede4ab2c1502e8769d6d8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdb3d7b981af632cfe211bf77cec39078183e1bf00340512775e3187ba7a260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd
789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.417027 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.438676 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495
d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.453718 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.455884 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.455978 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.456122 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.456137 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.456150 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.456208 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.456191604 +0000 UTC m=+147.352393730 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.456501 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.456569 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.456593 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.456704 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.456675968 +0000 UTC m=+147.352878294 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.468567 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:11Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.500590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.500649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.500660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.500681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.500692 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:11Z","lastTransitionTime":"2025-12-09T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.587583 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:24:11 crc kubenswrapper[4770]: E1209 14:24:11.587712 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.604121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.604164 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.604174 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.604191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.604201 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:11Z","lastTransitionTime":"2025-12-09T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.707327 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.707408 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.707421 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.707441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.707456 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:11Z","lastTransitionTime":"2025-12-09T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.810364 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.810406 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.810415 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.810428 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.810437 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:11Z","lastTransitionTime":"2025-12-09T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.912483 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.912514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.912543 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.912558 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:11 crc kubenswrapper[4770]: I1209 14:24:11.912566 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:11Z","lastTransitionTime":"2025-12-09T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.014520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.014568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.014584 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.014604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.014619 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.116854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.116901 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.116913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.116931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.116944 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.219324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.219363 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.219371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.219383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.219392 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.322090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.322127 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.322138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.322153 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.322165 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.424513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.424552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.424563 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.424576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.424585 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.526998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.527053 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.527070 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.527099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.527111 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.587433 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.587613 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:24:12 crc kubenswrapper[4770]: E1209 14:24:12.587678 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 14:24:12 crc kubenswrapper[4770]: E1209 14:24:12.587825 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.587928 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:24:12 crc kubenswrapper[4770]: E1209 14:24:12.588031 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.629464 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.629508 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.629516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.629531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.629544 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.731819 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.731868 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.731878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.731893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.731904 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.835211 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.835260 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.835269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.835285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.835297 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.938654 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.938719 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.938756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.938780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:12 crc kubenswrapper[4770]: I1209 14:24:12.938792 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:12Z","lastTransitionTime":"2025-12-09T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.041850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.041924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.041941 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.041967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.041993 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.062118 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/3.log"
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.063012 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/2.log"
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.067946 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3" exitCode=1
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.068023 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3"}
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.068116 4770 scope.go:117] "RemoveContainer" containerID="cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd"
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.069009 4770 scope.go:117] "RemoveContainer" containerID="7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3"
Dec 09 14:24:13 crc kubenswrapper[4770]: E1209 14:24:13.069199 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45"
Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.089441 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"981b1471-9853-4b69-9ab9-f06555203c07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.101680 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.101752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.101766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.101786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 
14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.101800 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.105557 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38c408-15a5-484c-b16f-27a123fdc5ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8d48bca138e010498dc1d434a1efa683e67f2176336efa0ce20ca32f4e27c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb50c0d2ebf1a5fb054060797889366776d97a07ede4ab2c1502e8769d6d8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdb3d7b981af632cfe211bf77cec39078183e1bf00340512775e3187ba7a260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd
789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75c78624ef2bb74a5c0ddfce4894d9ff7d9c3333b3ef9dc1fa090d276f1ebfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: E1209 14:24:13.118632 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.119974 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c644db48e4577207b806df23838826814934952b7d1160a52ded85e37ebd5100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.122782 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.122844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.122854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.122868 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.122877 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.134256 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344f75c44b9999bc4700907b8f4f93a5675f95996b89339f6505ed6a659be1cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: E1209 14:24:13.135344 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.139625 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.139712 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.139746 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.139764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.139776 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.145224 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sc42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f69e34-efd9-4905-9617-5e16997dbae3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63d3fe960d4f9c8bf0f6cb60bcf45fefcf4ba4335afbe2a0a1c84764aae5141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnbvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sc42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: E1209 14:24:13.152550 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.159888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.159991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.160025 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.160039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.160051 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.167195 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ef2402-cd48-4b26-97cc-42dd427262cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8e987669a701e0da807dfaa7762c882d139188550d8f4e649fab3a39e01d6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12ab45402fe22f3bb128869405a6c99593420220216343e08d13fe8f422487d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae58ff88b9712578cf93047108294fed461ac383907cff7195036f8d5b956682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029a83bb3e9d47368237385215aa5a09a939495d81fe0844abb1e4ba9c239325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3f55474d5b8798e0585b6088642455dcbe11693fcd13159cdd22b6107fd242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://267596db0c598344a5430d3e1061ca3c78d1c292370cfa35fd459f41e82aac99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14569a70fc8990715f10849a824658d6e8a8eba1a01852402af624777fd66f4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027cee91f0716828d4045a909f0fd33320d012916cc988d3e4d4d99ddfaa65bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:22:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: E1209 14:24:13.171032 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.174614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.174671 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.174684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.174705 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.174719 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.182035 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: E1209 14:24:13.187838 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f2a625-8e35-4260-bf8e-bdf5a38e558e\\\",\\\"systemUUID\\\":\\\"139bdcda-380d-487b-979e-5b5b7a0626d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: E1209 14:24:13.187960 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.189757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.189797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.189809 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.189829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.189842 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.193459 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b4e85f-5bbb-40a6-a03a-c775e971ed85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc7j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b7jh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.206015 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e782c9d4c88a9c3e7e466a4f337018d078f42b5f6c672f3be7a1b742fd514b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.218047 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2n6xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cfd8e64-8449-4d75-98b0-b98f94026bb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7573f60785e92b85ef874c4e3612e2d99f6cd29f26d68aaad54793d4454397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kknzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2n6xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.237261 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1d646c4-f044-4dcd-91d5-44034f746659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95715d6f873862bb0e34d597d3965090e5bd055b68af0e1a46e09bc0fd553583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120092f56b344983e83b9efaca0079f63c6d48b8cd5030f93a8d947fa815d6c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a274b2409e49eb7fbbaf0c1fa31bc7c4821734f57801df9dad9801e94cc7d631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb247dfb6fa52e0f33002ced757cfb05856cfb712ee7ecd1c2f3266fc6d9c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://888da536b3a8970c4e4480f4b4911a7ee4c924f00ebaf03ddee65019ffe2d42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87282a17a118024d851c16910003574186434f595a225bfbdb1ef350960279d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea307d02ce900a4e3646a67d1641e1a4d683668f38572efc0cac8aa57c45763\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lms92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2kpd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.256290 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39aa66d3-1416-4178-a4bc-34179463fd45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cea1034ff0d37ca36b4e9fc5fc71181d2f8ad086cd3073d772e182fd5ecfb6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:39Z\\\",\\\"message\\\":\\\"penshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 14:23:39.448815 6409 factory.go:656] Stopping watch factory\\\\nI1209 14:23:39.448939 6409 ovnkube.go:599] Stopped ovnkube\\\\nI1209 14:23:39.448988 6409 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 14:23:39.449005 6409 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI1209 14:23:39.449021 6409 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.060617ms\\\\nI1209 14:23:39.449032 6409 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-target for network=default\\\\nF1209 14:23:39.449051 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:24:12Z\\\",\\\"message\\\":\\\" transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1209 14:24:11.947708 6849 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID
\\\":\\\"cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpm77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k4btz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.272909 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e80dae7-be5f-4343-925c-1b29b8cc7af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f295450348eda3385b315eb7d5631c75245507d4ac39337466f7475d9fd27961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f325816085be48bc690eb2c167a38eb1fbeb4f526a97a3cf3967f5fc80fa7e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a552ba88339c99554453d7caa457747888e3ee5cdadbaf7735df9ff462c1aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:22:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.288840 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.292431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.292454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.292465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.292483 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.292493 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.303493 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.317423 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h5dw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38553c5-6cc9-435b-8c52-3262b861d1cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T14:23:56Z\\\",\\\"message\\\":\\\"2025-12-09T14:23:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590\\\\n2025-12-09T14:23:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1aa84416-1202-4c6c-af7c-c64375f5b590 to /host/opt/cni/bin/\\\\n2025-12-09T14:23:11Z [verbose] multus-daemon started\\\\n2025-12-09T14:23:11Z [verbose] Readiness Indicator file check\\\\n2025-12-09T14:23:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjt6c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h5dw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.330449 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51498c5e-9a5a-426a-aac1-0da87076675a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b791183e149a932e6bf69e11b40321c426985dd1b83fa10a30dbe1a25fffe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4kv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fbhnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.341727 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8382884e-8094-4402-aa96-1f40d0b21c24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c1c5224ccce0f997b13561f0e690274c103846797e99371dbeb764b8c95d51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9bae506bbf2d012c552598e868f944590d5aa4af49f8a2d1b221ff9689465d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:23:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfdmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:23:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m5jm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T14:24:13Z is after 2025-08-24T17:21:41Z" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.395520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.395559 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.395569 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.395589 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.395606 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.498268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.498339 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.498358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.498386 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.498405 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.587698 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:13 crc kubenswrapper[4770]: E1209 14:24:13.587934 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.601040 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.601093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.601110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.601134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.601150 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.704949 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.705031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.705060 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.705091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.705114 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.807506 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.807720 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.807776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.807794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.807805 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.910613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.910660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.910674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.910695 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:13 crc kubenswrapper[4770]: I1209 14:24:13.910709 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:13Z","lastTransitionTime":"2025-12-09T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.014164 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.014420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.014529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.014614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.014674 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.075033 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/3.log" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.118228 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.118284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.118295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.118311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.118321 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.220743 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.220790 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.220803 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.220820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.220832 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.323586 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.323615 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.323624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.323638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.323648 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.426308 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.426356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.426370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.426399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.426414 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.528638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.528696 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.528711 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.528741 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.528753 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.587589 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.587614 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.587692 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:14 crc kubenswrapper[4770]: E1209 14:24:14.587762 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:14 crc kubenswrapper[4770]: E1209 14:24:14.587887 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:14 crc kubenswrapper[4770]: E1209 14:24:14.588002 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.631420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.631476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.631484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.631498 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.631506 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.733760 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.733805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.733814 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.733829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.733839 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.835910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.835956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.835966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.835985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.835998 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.938382 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.938426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.938435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.938453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:14 crc kubenswrapper[4770]: I1209 14:24:14.938465 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:14Z","lastTransitionTime":"2025-12-09T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.042337 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.042389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.042402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.042421 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.042436 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.145609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.145664 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.145677 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.145706 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.145767 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.249031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.249102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.249118 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.249140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.249155 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.352123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.352179 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.352198 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.352223 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.352241 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.454469 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.454529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.454545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.454589 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.454610 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.556776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.556830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.556841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.556857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.556868 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.587349 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:15 crc kubenswrapper[4770]: E1209 14:24:15.587538 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.660161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.660222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.660240 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.660264 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.660284 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.763253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.763285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.763293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.763306 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.763316 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.866278 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.866343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.866360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.866381 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.866422 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.968779 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.968847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.968862 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.968878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:15 crc kubenswrapper[4770]: I1209 14:24:15.968893 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:15Z","lastTransitionTime":"2025-12-09T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.071666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.071712 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.071766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.071787 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.071798 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.175170 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.175236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.175253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.175277 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.175296 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.278269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.278353 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.278373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.278402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.278421 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.381763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.381820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.381830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.381848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.381863 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.484339 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.484396 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.484410 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.484432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.484444 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.586720 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.586814 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.586825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.586842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.586853 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.588270 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:16 crc kubenswrapper[4770]: E1209 14:24:16.588451 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.588692 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:16 crc kubenswrapper[4770]: E1209 14:24:16.588852 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.589037 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:16 crc kubenswrapper[4770]: E1209 14:24:16.589223 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.689671 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.689715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.689744 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.689760 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.689770 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.791932 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.792042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.792064 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.792094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.792115 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.894901 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.894950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.894966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.894992 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.895028 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.997583 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.997635 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.997646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.997677 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:16 crc kubenswrapper[4770]: I1209 14:24:16.997695 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:16Z","lastTransitionTime":"2025-12-09T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.100465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.100503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.100514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.100530 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.100541 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:17Z","lastTransitionTime":"2025-12-09T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.204774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.204825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.204836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.204858 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.204872 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:17Z","lastTransitionTime":"2025-12-09T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.307581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.307637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.307646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.307668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.307681 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:17Z","lastTransitionTime":"2025-12-09T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.409822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.409881 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.409897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.409921 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.409938 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:17Z","lastTransitionTime":"2025-12-09T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.511423 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.511466 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.511477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.511492 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.511503 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:17Z","lastTransitionTime":"2025-12-09T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.588168 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:17 crc kubenswrapper[4770]: E1209 14:24:17.588349 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.614872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.614907 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.614916 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.614929 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.614938 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:17Z","lastTransitionTime":"2025-12-09T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.717115 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.717175 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.717193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.717215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.717231 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:17Z","lastTransitionTime":"2025-12-09T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.819610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.819682 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.819697 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.819719 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.819770 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:17Z","lastTransitionTime":"2025-12-09T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.921895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.922007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.922027 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.922048 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:17 crc kubenswrapper[4770]: I1209 14:24:17.922062 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:17Z","lastTransitionTime":"2025-12-09T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.024433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.024496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.024515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.024542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.024561 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.127897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.127999 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.128017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.128042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.128063 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.230807 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.230892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.230918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.230952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.230978 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.334688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.334781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.334803 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.334821 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.334836 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.437571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.437630 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.437648 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.437674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.437696 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.539618 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.539658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.539666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.539681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.539693 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.587688 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.587976 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:18 crc kubenswrapper[4770]: E1209 14:24:18.588030 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:18 crc kubenswrapper[4770]: E1209 14:24:18.588507 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.589798 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:18 crc kubenswrapper[4770]: E1209 14:24:18.590041 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.646485 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.646557 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.646581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.646614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.646639 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.647400 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.647376988 podStartE2EDuration="36.647376988s" podCreationTimestamp="2025-12-09 14:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.646898895 +0000 UTC m=+90.543101031" watchObservedRunningTime="2025-12-09 14:24:18.647376988 +0000 UTC m=+90.543579124" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.647568 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.647564333 podStartE2EDuration="1m11.647564333s" podCreationTimestamp="2025-12-09 14:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.626862863 +0000 UTC m=+90.523064999" watchObservedRunningTime="2025-12-09 14:24:18.647564333 +0000 UTC m=+90.543766469" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.748465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.750860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.750968 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.751029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.751048 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.756105 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=70.756081868 podStartE2EDuration="1m10.756081868s" podCreationTimestamp="2025-12-09 14:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.753647 +0000 UTC m=+90.649849216" watchObservedRunningTime="2025-12-09 14:24:18.756081868 +0000 UTC m=+90.652284044" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.756442 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9sc42" podStartSLOduration=69.756428798 podStartE2EDuration="1m9.756428798s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.710574551 +0000 UTC m=+90.606776727" watchObservedRunningTime="2025-12-09 14:24:18.756428798 +0000 UTC m=+90.652630974" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.829443 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2n6xq" podStartSLOduration=69.829428686 podStartE2EDuration="1m9.829428686s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.829031705 +0000 UTC m=+90.725233841" watchObservedRunningTime="2025-12-09 14:24:18.829428686 +0000 UTC m=+90.725630822" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.849701 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2kpd7" podStartSLOduration=69.849675274 podStartE2EDuration="1m9.849675274s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.848659935 +0000 UTC m=+90.744862071" watchObservedRunningTime="2025-12-09 14:24:18.849675274 +0000 UTC m=+90.745877440" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.853463 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.853550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.853578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.853627 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.853652 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.911685 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.911666963 podStartE2EDuration="1m11.911666963s" podCreationTimestamp="2025-12-09 14:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.892007661 +0000 UTC m=+90.788209807" watchObservedRunningTime="2025-12-09 14:24:18.911666963 +0000 UTC m=+90.807869099" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.943369 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-h5dw2" podStartSLOduration=69.943346032 podStartE2EDuration="1m9.943346032s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.942549999 +0000 UTC m=+90.838752135" watchObservedRunningTime="2025-12-09 14:24:18.943346032 +0000 UTC m=+90.839548188" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.956251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.956307 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.956324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.956344 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.956361 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:18Z","lastTransitionTime":"2025-12-09T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:18 crc kubenswrapper[4770]: I1209 14:24:18.956446 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podStartSLOduration=69.956432949 podStartE2EDuration="1m9.956432949s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.955658197 +0000 UTC m=+90.851860343" watchObservedRunningTime="2025-12-09 14:24:18.956432949 +0000 UTC m=+90.852635105" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.058439 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.058493 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.058517 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.058532 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.058542 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.160588 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.160645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.160660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.160716 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.160775 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.263252 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.263340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.263365 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.263400 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.263423 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.366527 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.366614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.366634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.366661 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.366678 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.469166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.469227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.469241 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.469261 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.469273 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.571599 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.571656 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.571674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.571696 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.571714 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.588318 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:19 crc kubenswrapper[4770]: E1209 14:24:19.588559 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.674640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.674781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.674847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.674881 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.674904 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.778020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.778089 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.778112 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.778135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.778157 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.881312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.881360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.881371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.881386 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.881395 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.984692 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.984758 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.984772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.984791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:19 crc kubenswrapper[4770]: I1209 14:24:19.984805 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:19Z","lastTransitionTime":"2025-12-09T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.087610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.087871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.087955 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.088038 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.088111 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:20Z","lastTransitionTime":"2025-12-09T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.190049 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.190288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.190350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.190414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.190471 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:20Z","lastTransitionTime":"2025-12-09T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.293436 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.293516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.293537 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.293573 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.293597 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:20Z","lastTransitionTime":"2025-12-09T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.395985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.396354 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.396484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.396638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.396963 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:20Z","lastTransitionTime":"2025-12-09T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.499802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.500041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.500113 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.500234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.500335 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:20Z","lastTransitionTime":"2025-12-09T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.588223 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.588988 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.589132 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:20 crc kubenswrapper[4770]: E1209 14:24:20.589224 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:20 crc kubenswrapper[4770]: E1209 14:24:20.589524 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:20 crc kubenswrapper[4770]: E1209 14:24:20.589690 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.602117 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m5jm2" podStartSLOduration=71.602096492 podStartE2EDuration="1m11.602096492s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:18.968680533 +0000 UTC m=+90.864882679" watchObservedRunningTime="2025-12-09 14:24:20.602096492 +0000 UTC m=+92.498298638" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.603488 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.604613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.604658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.604674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.604694 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.604710 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:20Z","lastTransitionTime":"2025-12-09T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.707549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.707588 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.707600 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.707619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.707632 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:20Z","lastTransitionTime":"2025-12-09T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.810962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.811019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.811031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.811051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.811351 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:20Z","lastTransitionTime":"2025-12-09T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.913371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.913431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.913445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.913460 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:20 crc kubenswrapper[4770]: I1209 14:24:20.913472 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:20Z","lastTransitionTime":"2025-12-09T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.017705 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.017824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.017842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.017870 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.017888 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.121721 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.121825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.121849 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.121877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.121894 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.225049 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.225108 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.225120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.225143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.225163 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.328481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.328567 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.328586 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.328613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.328631 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.431662 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.431751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.431764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.431785 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.431798 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.534901 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.534979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.534995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.535008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.535017 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.587978 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:21 crc kubenswrapper[4770]: E1209 14:24:21.588095 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.637804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.637872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.637885 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.637903 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.637915 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.740564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.740890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.740968 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.741067 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.741164 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.844454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.844530 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.844551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.844575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.844592 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.947088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.947126 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.947137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.947150 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:21 crc kubenswrapper[4770]: I1209 14:24:21.947160 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:21Z","lastTransitionTime":"2025-12-09T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.050557 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.051005 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.051206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.051436 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.051643 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.154346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.154400 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.154414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.154435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.154449 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.257047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.257331 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.257476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.257574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.257665 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.361789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.361881 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.361904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.361930 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.361951 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.465307 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.465361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.465378 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.465400 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.465417 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.568159 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.568224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.568241 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.568267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.568285 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.587980 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.588060 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.588070 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:22 crc kubenswrapper[4770]: E1209 14:24:22.588188 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:22 crc kubenswrapper[4770]: E1209 14:24:22.588356 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:22 crc kubenswrapper[4770]: E1209 14:24:22.588572 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.670888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.670952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.670972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.670995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.671015 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.774769 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.774825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.774850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.774873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.774888 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.877646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.877689 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.877701 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.877717 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.877747 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.980666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.980700 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.980708 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.980737 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:22 crc kubenswrapper[4770]: I1209 14:24:22.980748 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:22Z","lastTransitionTime":"2025-12-09T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.084158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.084207 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.084222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.084243 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.084259 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:23Z","lastTransitionTime":"2025-12-09T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.187537 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.187591 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.187604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.187625 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.187640 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:23Z","lastTransitionTime":"2025-12-09T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.291055 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.291096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.291107 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.291125 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.291137 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:23Z","lastTransitionTime":"2025-12-09T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.352625 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.352661 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.352670 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.352684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.352694 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:24:23Z","lastTransitionTime":"2025-12-09T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.401558 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8"] Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.401989 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.405846 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.406005 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.406226 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.406505 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.432797 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.432777402 podStartE2EDuration="3.432777402s" podCreationTimestamp="2025-12-09 14:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:23.432089913 +0000 UTC m=+95.328292049" watchObservedRunningTime="2025-12-09 14:24:23.432777402 +0000 UTC m=+95.328979558" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.497214 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/187ca96f-4ab2-40fb-a832-ba0b333a7ede-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.497262 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/187ca96f-4ab2-40fb-a832-ba0b333a7ede-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.497349 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187ca96f-4ab2-40fb-a832-ba0b333a7ede-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.497469 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/187ca96f-4ab2-40fb-a832-ba0b333a7ede-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.497629 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/187ca96f-4ab2-40fb-a832-ba0b333a7ede-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.587889 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:23 crc kubenswrapper[4770]: E1209 14:24:23.588229 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.598384 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/187ca96f-4ab2-40fb-a832-ba0b333a7ede-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.598428 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/187ca96f-4ab2-40fb-a832-ba0b333a7ede-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.598451 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187ca96f-4ab2-40fb-a832-ba0b333a7ede-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.598467 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/187ca96f-4ab2-40fb-a832-ba0b333a7ede-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.598497 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/187ca96f-4ab2-40fb-a832-ba0b333a7ede-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.598522 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/187ca96f-4ab2-40fb-a832-ba0b333a7ede-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.598612 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/187ca96f-4ab2-40fb-a832-ba0b333a7ede-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.599495 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/187ca96f-4ab2-40fb-a832-ba0b333a7ede-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.604826 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187ca96f-4ab2-40fb-a832-ba0b333a7ede-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.618060 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/187ca96f-4ab2-40fb-a832-ba0b333a7ede-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vwmc8\" (UID: \"187ca96f-4ab2-40fb-a832-ba0b333a7ede\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: I1209 14:24:23.721871 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" Dec 09 14:24:23 crc kubenswrapper[4770]: W1209 14:24:23.737187 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod187ca96f_4ab2_40fb_a832_ba0b333a7ede.slice/crio-dc345270cbc58daca89755cb3fa1e30aa6b7d809eac7c57c9439af49b9c31c62 WatchSource:0}: Error finding container dc345270cbc58daca89755cb3fa1e30aa6b7d809eac7c57c9439af49b9c31c62: Status 404 returned error can't find the container with id dc345270cbc58daca89755cb3fa1e30aa6b7d809eac7c57c9439af49b9c31c62 Dec 09 14:24:24 crc kubenswrapper[4770]: I1209 14:24:24.138392 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" event={"ID":"187ca96f-4ab2-40fb-a832-ba0b333a7ede","Type":"ContainerStarted","Data":"cf3f5694eda13fc340050c5493a91772aff2aceef0ffcfe9ee4a90ba97eda7f6"} Dec 09 14:24:24 crc kubenswrapper[4770]: I1209 14:24:24.138741 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" event={"ID":"187ca96f-4ab2-40fb-a832-ba0b333a7ede","Type":"ContainerStarted","Data":"dc345270cbc58daca89755cb3fa1e30aa6b7d809eac7c57c9439af49b9c31c62"} Dec 09 14:24:24 crc kubenswrapper[4770]: I1209 14:24:24.587931 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:24 crc kubenswrapper[4770]: E1209 14:24:24.588087 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:24 crc kubenswrapper[4770]: I1209 14:24:24.588157 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:24 crc kubenswrapper[4770]: I1209 14:24:24.587949 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:24 crc kubenswrapper[4770]: E1209 14:24:24.588365 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:24 crc kubenswrapper[4770]: E1209 14:24:24.588575 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:25 crc kubenswrapper[4770]: I1209 14:24:25.587946 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:25 crc kubenswrapper[4770]: E1209 14:24:25.588694 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:26 crc kubenswrapper[4770]: I1209 14:24:26.587595 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:26 crc kubenswrapper[4770]: I1209 14:24:26.587634 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:26 crc kubenswrapper[4770]: E1209 14:24:26.587763 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:26 crc kubenswrapper[4770]: E1209 14:24:26.587982 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:26 crc kubenswrapper[4770]: I1209 14:24:26.588531 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:26 crc kubenswrapper[4770]: E1209 14:24:26.588762 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:27 crc kubenswrapper[4770]: I1209 14:24:27.587586 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:27 crc kubenswrapper[4770]: E1209 14:24:27.587767 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:27 crc kubenswrapper[4770]: I1209 14:24:27.588887 4770 scope.go:117] "RemoveContainer" containerID="7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3" Dec 09 14:24:27 crc kubenswrapper[4770]: E1209 14:24:27.589244 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" Dec 09 14:24:27 crc kubenswrapper[4770]: I1209 14:24:27.626219 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vwmc8" podStartSLOduration=78.626203377 podStartE2EDuration="1m18.626203377s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:24.157160627 +0000 UTC m=+96.053362763" watchObservedRunningTime="2025-12-09 14:24:27.626203377 +0000 UTC m=+99.522405513" Dec 09 14:24:27 crc kubenswrapper[4770]: I1209 14:24:27.813845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:27 crc kubenswrapper[4770]: E1209 14:24:27.814012 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:24:27 crc kubenswrapper[4770]: E1209 14:24:27.814288 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs podName:98b4e85f-5bbb-40a6-a03a-c775e971ed85 nodeName:}" failed. No retries permitted until 2025-12-09 14:25:31.814268444 +0000 UTC m=+163.710470580 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs") pod "network-metrics-daemon-b7jh8" (UID: "98b4e85f-5bbb-40a6-a03a-c775e971ed85") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:24:28 crc kubenswrapper[4770]: I1209 14:24:28.587626 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:28 crc kubenswrapper[4770]: I1209 14:24:28.587713 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:28 crc kubenswrapper[4770]: E1209 14:24:28.589343 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:28 crc kubenswrapper[4770]: I1209 14:24:28.589358 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:28 crc kubenswrapper[4770]: E1209 14:24:28.590004 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:28 crc kubenswrapper[4770]: E1209 14:24:28.589460 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:29 crc kubenswrapper[4770]: I1209 14:24:29.587779 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:29 crc kubenswrapper[4770]: E1209 14:24:29.588431 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:30 crc kubenswrapper[4770]: I1209 14:24:30.587544 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:30 crc kubenswrapper[4770]: I1209 14:24:30.587591 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:30 crc kubenswrapper[4770]: I1209 14:24:30.587765 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:30 crc kubenswrapper[4770]: E1209 14:24:30.587870 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:30 crc kubenswrapper[4770]: E1209 14:24:30.587923 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:30 crc kubenswrapper[4770]: E1209 14:24:30.588011 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:31 crc kubenswrapper[4770]: I1209 14:24:31.588005 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:31 crc kubenswrapper[4770]: E1209 14:24:31.588934 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:32 crc kubenswrapper[4770]: I1209 14:24:32.587836 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:32 crc kubenswrapper[4770]: I1209 14:24:32.587865 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:32 crc kubenswrapper[4770]: E1209 14:24:32.588024 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:32 crc kubenswrapper[4770]: I1209 14:24:32.588108 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:32 crc kubenswrapper[4770]: E1209 14:24:32.588195 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:32 crc kubenswrapper[4770]: E1209 14:24:32.588283 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:33 crc kubenswrapper[4770]: I1209 14:24:33.587419 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:33 crc kubenswrapper[4770]: E1209 14:24:33.587657 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:34 crc kubenswrapper[4770]: I1209 14:24:34.587557 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:34 crc kubenswrapper[4770]: I1209 14:24:34.587677 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:34 crc kubenswrapper[4770]: I1209 14:24:34.587798 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:34 crc kubenswrapper[4770]: E1209 14:24:34.587721 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:34 crc kubenswrapper[4770]: E1209 14:24:34.587931 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:34 crc kubenswrapper[4770]: E1209 14:24:34.588011 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:35 crc kubenswrapper[4770]: I1209 14:24:35.587609 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:35 crc kubenswrapper[4770]: E1209 14:24:35.587802 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:36 crc kubenswrapper[4770]: I1209 14:24:36.588107 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:36 crc kubenswrapper[4770]: I1209 14:24:36.588176 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:36 crc kubenswrapper[4770]: E1209 14:24:36.588265 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:36 crc kubenswrapper[4770]: E1209 14:24:36.588517 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:36 crc kubenswrapper[4770]: I1209 14:24:36.588953 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:36 crc kubenswrapper[4770]: E1209 14:24:36.589181 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:37 crc kubenswrapper[4770]: I1209 14:24:37.587814 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:37 crc kubenswrapper[4770]: E1209 14:24:37.588209 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:38 crc kubenswrapper[4770]: I1209 14:24:38.587241 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:38 crc kubenswrapper[4770]: E1209 14:24:38.587389 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:38 crc kubenswrapper[4770]: I1209 14:24:38.587486 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:38 crc kubenswrapper[4770]: I1209 14:24:38.587548 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:38 crc kubenswrapper[4770]: E1209 14:24:38.587627 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:38 crc kubenswrapper[4770]: E1209 14:24:38.588849 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:39 crc kubenswrapper[4770]: I1209 14:24:39.587537 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:39 crc kubenswrapper[4770]: E1209 14:24:39.587846 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:40 crc kubenswrapper[4770]: I1209 14:24:40.587553 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:40 crc kubenswrapper[4770]: I1209 14:24:40.587647 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:40 crc kubenswrapper[4770]: E1209 14:24:40.587795 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:40 crc kubenswrapper[4770]: E1209 14:24:40.587938 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:40 crc kubenswrapper[4770]: I1209 14:24:40.588206 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:40 crc kubenswrapper[4770]: E1209 14:24:40.588637 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:41 crc kubenswrapper[4770]: I1209 14:24:41.587979 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:41 crc kubenswrapper[4770]: E1209 14:24:41.588197 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:42 crc kubenswrapper[4770]: I1209 14:24:42.587616 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:42 crc kubenswrapper[4770]: I1209 14:24:42.587837 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:42 crc kubenswrapper[4770]: E1209 14:24:42.587997 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:42 crc kubenswrapper[4770]: I1209 14:24:42.588109 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:42 crc kubenswrapper[4770]: E1209 14:24:42.588316 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:42 crc kubenswrapper[4770]: E1209 14:24:42.589147 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:42 crc kubenswrapper[4770]: I1209 14:24:42.589644 4770 scope.go:117] "RemoveContainer" containerID="7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3" Dec 09 14:24:42 crc kubenswrapper[4770]: E1209 14:24:42.589989 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k4btz_openshift-ovn-kubernetes(39aa66d3-1416-4178-a4bc-34179463fd45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" Dec 09 14:24:43 crc kubenswrapper[4770]: I1209 14:24:43.587511 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:43 crc kubenswrapper[4770]: E1209 14:24:43.587691 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:44 crc kubenswrapper[4770]: I1209 14:24:44.587816 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:44 crc kubenswrapper[4770]: I1209 14:24:44.587940 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:44 crc kubenswrapper[4770]: E1209 14:24:44.587957 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:44 crc kubenswrapper[4770]: E1209 14:24:44.588115 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:44 crc kubenswrapper[4770]: I1209 14:24:44.588689 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:44 crc kubenswrapper[4770]: E1209 14:24:44.588782 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:45 crc kubenswrapper[4770]: I1209 14:24:45.216440 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h5dw2_c38553c5-6cc9-435b-8c52-3262b861d1cf/kube-multus/1.log" Dec 09 14:24:45 crc kubenswrapper[4770]: I1209 14:24:45.217099 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h5dw2_c38553c5-6cc9-435b-8c52-3262b861d1cf/kube-multus/0.log" Dec 09 14:24:45 crc kubenswrapper[4770]: I1209 14:24:45.217187 4770 generic.go:334] "Generic (PLEG): container finished" podID="c38553c5-6cc9-435b-8c52-3262b861d1cf" containerID="a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5" exitCode=1 Dec 09 14:24:45 crc kubenswrapper[4770]: I1209 14:24:45.217241 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h5dw2" event={"ID":"c38553c5-6cc9-435b-8c52-3262b861d1cf","Type":"ContainerDied","Data":"a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5"} Dec 09 14:24:45 crc kubenswrapper[4770]: I1209 14:24:45.217322 4770 scope.go:117] "RemoveContainer" containerID="08e25561f67f080bc2849d3c205b83c6c6cf88475ddeb99bb2f1f02b198cb8ed" Dec 09 14:24:45 crc kubenswrapper[4770]: I1209 14:24:45.217779 4770 scope.go:117] "RemoveContainer" containerID="a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5" Dec 09 14:24:45 crc kubenswrapper[4770]: E1209 14:24:45.217985 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-h5dw2_openshift-multus(c38553c5-6cc9-435b-8c52-3262b861d1cf)\"" pod="openshift-multus/multus-h5dw2" podUID="c38553c5-6cc9-435b-8c52-3262b861d1cf" Dec 09 14:24:45 crc kubenswrapper[4770]: I1209 14:24:45.587237 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:45 crc kubenswrapper[4770]: E1209 14:24:45.587771 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:46 crc kubenswrapper[4770]: I1209 14:24:46.221716 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h5dw2_c38553c5-6cc9-435b-8c52-3262b861d1cf/kube-multus/1.log" Dec 09 14:24:46 crc kubenswrapper[4770]: I1209 14:24:46.587833 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:46 crc kubenswrapper[4770]: E1209 14:24:46.587986 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:46 crc kubenswrapper[4770]: I1209 14:24:46.588114 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:46 crc kubenswrapper[4770]: E1209 14:24:46.588247 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:46 crc kubenswrapper[4770]: I1209 14:24:46.588448 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:46 crc kubenswrapper[4770]: E1209 14:24:46.588769 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:47 crc kubenswrapper[4770]: I1209 14:24:47.588218 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:47 crc kubenswrapper[4770]: E1209 14:24:47.588346 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:48 crc kubenswrapper[4770]: I1209 14:24:48.587307 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:48 crc kubenswrapper[4770]: I1209 14:24:48.587349 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:48 crc kubenswrapper[4770]: E1209 14:24:48.588619 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:48 crc kubenswrapper[4770]: I1209 14:24:48.588633 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:48 crc kubenswrapper[4770]: E1209 14:24:48.589259 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:48 crc kubenswrapper[4770]: E1209 14:24:48.589342 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:48 crc kubenswrapper[4770]: E1209 14:24:48.592169 4770 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 09 14:24:48 crc kubenswrapper[4770]: E1209 14:24:48.756164 4770 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:24:49 crc kubenswrapper[4770]: I1209 14:24:49.587565 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:49 crc kubenswrapper[4770]: E1209 14:24:49.587800 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:50 crc kubenswrapper[4770]: I1209 14:24:50.588186 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:50 crc kubenswrapper[4770]: I1209 14:24:50.588230 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:50 crc kubenswrapper[4770]: I1209 14:24:50.588289 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:50 crc kubenswrapper[4770]: E1209 14:24:50.589089 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:50 crc kubenswrapper[4770]: E1209 14:24:50.589137 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:50 crc kubenswrapper[4770]: E1209 14:24:50.589253 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:51 crc kubenswrapper[4770]: I1209 14:24:51.587516 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:51 crc kubenswrapper[4770]: E1209 14:24:51.587675 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:52 crc kubenswrapper[4770]: I1209 14:24:52.587945 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:52 crc kubenswrapper[4770]: I1209 14:24:52.588039 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:52 crc kubenswrapper[4770]: I1209 14:24:52.587796 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:52 crc kubenswrapper[4770]: E1209 14:24:52.588195 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:52 crc kubenswrapper[4770]: E1209 14:24:52.588309 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:52 crc kubenswrapper[4770]: E1209 14:24:52.588396 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:53 crc kubenswrapper[4770]: I1209 14:24:53.587320 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:53 crc kubenswrapper[4770]: E1209 14:24:53.587484 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:53 crc kubenswrapper[4770]: E1209 14:24:53.758387 4770 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:24:54 crc kubenswrapper[4770]: I1209 14:24:54.588118 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:54 crc kubenswrapper[4770]: I1209 14:24:54.588299 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:54 crc kubenswrapper[4770]: E1209 14:24:54.588357 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:54 crc kubenswrapper[4770]: I1209 14:24:54.588426 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:54 crc kubenswrapper[4770]: E1209 14:24:54.588611 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:54 crc kubenswrapper[4770]: E1209 14:24:54.588804 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:55 crc kubenswrapper[4770]: I1209 14:24:55.588134 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:55 crc kubenswrapper[4770]: E1209 14:24:55.588420 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:55 crc kubenswrapper[4770]: I1209 14:24:55.589636 4770 scope.go:117] "RemoveContainer" containerID="7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3" Dec 09 14:24:56 crc kubenswrapper[4770]: I1209 14:24:56.264184 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovnkube-controller/3.log" Dec 09 14:24:56 crc kubenswrapper[4770]: I1209 14:24:56.268086 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerStarted","Data":"1ebc9ebb54c0e83d7c8040bfc05ce9ea915130a5e497e2573d722aae810b7ebb"} Dec 09 14:24:56 crc kubenswrapper[4770]: I1209 14:24:56.268519 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:24:56 crc kubenswrapper[4770]: I1209 14:24:56.589216 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:56 crc kubenswrapper[4770]: E1209 14:24:56.589324 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:56 crc kubenswrapper[4770]: I1209 14:24:56.589741 4770 scope.go:117] "RemoveContainer" containerID="a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5" Dec 09 14:24:56 crc kubenswrapper[4770]: I1209 14:24:56.589982 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:56 crc kubenswrapper[4770]: E1209 14:24:56.590041 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:56 crc kubenswrapper[4770]: I1209 14:24:56.590140 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:56 crc kubenswrapper[4770]: E1209 14:24:56.590179 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:56 crc kubenswrapper[4770]: I1209 14:24:56.622067 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podStartSLOduration=107.622045779 podStartE2EDuration="1m47.622045779s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:56.303633695 +0000 UTC m=+128.199835831" watchObservedRunningTime="2025-12-09 14:24:56.622045779 +0000 UTC m=+128.518247925" Dec 09 14:24:57 crc kubenswrapper[4770]: I1209 14:24:57.155391 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-b7jh8"] Dec 09 14:24:57 crc kubenswrapper[4770]: I1209 14:24:57.155509 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:57 crc kubenswrapper[4770]: E1209 14:24:57.155586 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:57 crc kubenswrapper[4770]: I1209 14:24:57.274228 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h5dw2_c38553c5-6cc9-435b-8c52-3262b861d1cf/kube-multus/1.log" Dec 09 14:24:57 crc kubenswrapper[4770]: I1209 14:24:57.275082 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h5dw2" event={"ID":"c38553c5-6cc9-435b-8c52-3262b861d1cf","Type":"ContainerStarted","Data":"56d185a3c0466cb2fca6ba8405177abc87c8aae2c2b0db2307e65712aabe4905"} Dec 09 14:24:58 crc kubenswrapper[4770]: I1209 14:24:58.588969 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:24:58 crc kubenswrapper[4770]: I1209 14:24:58.589020 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:24:58 crc kubenswrapper[4770]: I1209 14:24:58.589040 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:24:58 crc kubenswrapper[4770]: I1209 14:24:58.588968 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:24:58 crc kubenswrapper[4770]: E1209 14:24:58.589093 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:24:58 crc kubenswrapper[4770]: E1209 14:24:58.589158 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:24:58 crc kubenswrapper[4770]: E1209 14:24:58.589214 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:24:58 crc kubenswrapper[4770]: E1209 14:24:58.589257 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:24:58 crc kubenswrapper[4770]: E1209 14:24:58.759173 4770 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:25:00 crc kubenswrapper[4770]: I1209 14:25:00.587845 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:25:00 crc kubenswrapper[4770]: I1209 14:25:00.587951 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:25:00 crc kubenswrapper[4770]: I1209 14:25:00.587978 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:25:00 crc kubenswrapper[4770]: I1209 14:25:00.588119 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:25:00 crc kubenswrapper[4770]: E1209 14:25:00.589500 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:25:00 crc kubenswrapper[4770]: E1209 14:25:00.589321 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:25:00 crc kubenswrapper[4770]: E1209 14:25:00.589544 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:25:00 crc kubenswrapper[4770]: E1209 14:25:00.589334 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:25:02 crc kubenswrapper[4770]: I1209 14:25:02.587295 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:25:02 crc kubenswrapper[4770]: I1209 14:25:02.587355 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:25:02 crc kubenswrapper[4770]: E1209 14:25:02.587440 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 14:25:02 crc kubenswrapper[4770]: I1209 14:25:02.587487 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:25:02 crc kubenswrapper[4770]: E1209 14:25:02.587544 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 14:25:02 crc kubenswrapper[4770]: E1209 14:25:02.587627 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 14:25:02 crc kubenswrapper[4770]: I1209 14:25:02.587497 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:25:02 crc kubenswrapper[4770]: E1209 14:25:02.587759 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b7jh8" podUID="98b4e85f-5bbb-40a6-a03a-c775e971ed85" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.085372 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.128150 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mw288"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.128836 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.129387 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.130055 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.130286 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.130688 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.131434 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q46jb"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.132014 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.132563 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.133150 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.136209 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.136935 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.137933 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-q979p"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.138558 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-q979p" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.141374 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.141455 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.141404 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.141407 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.142181 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.143763 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.145310 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.152914 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.153876 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.155818 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4sg97"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.157086 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.159045 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.171217 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.172755 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcdm\" (UniqueName: \"kubernetes.io/projected/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-kube-api-access-cbcdm\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.172800 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-serving-cert\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.172852 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-client-ca\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.172876 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.172921 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-config\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.173853 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174042 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174175 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174292 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174396 4770 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174440 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174589 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174608 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174702 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174703 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.174877 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.175011 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22k76"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.175051 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.175160 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.175257 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.175378 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.175522 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.175592 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.175644 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.176136 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.176246 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.176554 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.176833 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.176603 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177072 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177161 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177273 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177412 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177489 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177573 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177420 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177793 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177873 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177925 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.177995 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.178259 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.176266 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.178652 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.179061 4770 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.179200 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.182114 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.189004 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.189079 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.189121 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.189273 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.189589 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.189787 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.192468 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.192514 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.195330 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.201310 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.201453 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.201619 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.204913 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ztlqj"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.205880 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.206406 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dt62n"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.207117 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.207565 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tt465"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.207870 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.207980 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.208146 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.208232 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.208369 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.223228 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tt465" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.223680 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.225786 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.228344 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.233881 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.233941 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.234017 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.233944 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.234320 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.234426 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.234523 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.234689 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 
14:25:04.235558 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x2gs6"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.236186 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.236265 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.237026 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.237650 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.239033 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.239176 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.240353 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.240382 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.241320 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z"] Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.241855 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.242479 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.242663 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.258475 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.258619 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.260688 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.260911 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.269884 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273435 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23303764-ac1d-4937-9eb0-1b7f6e24e29f-serving-cert\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-client-ca\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273503 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f321b740-62d9-4f6a-8aac-0faa316ede0d-etcd-service-ca\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273527 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfr4\" (UniqueName: \"kubernetes.io/projected/23303764-ac1d-4937-9eb0-1b7f6e24e29f-kube-api-access-5bfr4\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273549 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-config\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273569 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe586163-823a-49a4-a93e-55e0cc485b8f-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273591 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257s4\" (UniqueName: \"kubernetes.io/projected/0e385a15-7fc9-4b6d-8770-89e954a5b286-kube-api-access-257s4\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273611 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273633 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx6hw\" (UniqueName: \"kubernetes.io/projected/49d5b890-581d-4f7a-9811-2f011513994f-kube-api-access-fx6hw\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273658 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-client-ca\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273680 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e385a15-7fc9-4b6d-8770-89e954a5b286-serving-cert\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273700 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23303764-ac1d-4937-9eb0-1b7f6e24e29f-config\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273719 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbxtf\" (UniqueName: \"kubernetes.io/projected/f321b740-62d9-4f6a-8aac-0faa316ede0d-kube-api-access-xbxtf\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273766 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f321b740-62d9-4f6a-8aac-0faa316ede0d-serving-cert\") pod \"etcd-operator-b45778765-dt62n\" (UID: 
\"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273803 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqvn\" (UniqueName: \"kubernetes.io/projected/d377b023-282a-4a7f-a2fb-d944873c3bbb-kube-api-access-6bqvn\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273824 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-config\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsx4x\" (UniqueName: \"kubernetes.io/projected/a895efff-87fa-45aa-8436-f72feb6ecf83-kube-api-access-nsx4x\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273868 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d377b023-282a-4a7f-a2fb-d944873c3bbb-encryption-config\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273887 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62hq4\" (UniqueName: \"kubernetes.io/projected/fe586163-823a-49a4-a93e-55e0cc485b8f-kube-api-access-62hq4\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273908 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f321b740-62d9-4f6a-8aac-0faa316ede0d-etcd-client\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273927 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a895efff-87fa-45aa-8436-f72feb6ecf83-auth-proxy-config\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273952 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbcdm\" (UniqueName: \"kubernetes.io/projected/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-kube-api-access-cbcdm\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273972 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d377b023-282a-4a7f-a2fb-d944873c3bbb-etcd-client\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.273992 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a895efff-87fa-45aa-8436-f72feb6ecf83-machine-approver-tls\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274013 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c429a77c-763f-4db9-b2e9-7090262bf700-etcd-client\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c429a77c-763f-4db9-b2e9-7090262bf700-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274051 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d377b023-282a-4a7f-a2fb-d944873c3bbb-audit-dir\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274071 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d5b890-581d-4f7a-9811-2f011513994f-serving-cert\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274099 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23303764-ac1d-4937-9eb0-1b7f6e24e29f-service-ca-bundle\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274120 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe586163-823a-49a4-a93e-55e0cc485b8f-config\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 
14:25:04.274139 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c429a77c-763f-4db9-b2e9-7090262bf700-encryption-config\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274160 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f321b740-62d9-4f6a-8aac-0faa316ede0d-etcd-ca\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274182 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f321b740-62d9-4f6a-8aac-0faa316ede0d-config\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274202 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e385a15-7fc9-4b6d-8770-89e954a5b286-trusted-ca\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274221 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23303764-ac1d-4937-9eb0-1b7f6e24e29f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274243 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274263 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-audit\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274283 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d377b023-282a-4a7f-a2fb-d944873c3bbb-serving-cert\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274303 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c429a77c-763f-4db9-b2e9-7090262bf700-serving-cert\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274322 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c429a77c-763f-4db9-b2e9-7090262bf700-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274345 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d377b023-282a-4a7f-a2fb-d944873c3bbb-node-pullsecrets\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274364 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-image-import-ca\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274384 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-etcd-serving-ca\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274406 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a895efff-87fa-45aa-8436-f72feb6ecf83-config\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274424 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c429a77c-763f-4db9-b2e9-7090262bf700-audit-policies\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274445 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgggt\" (UniqueName: \"kubernetes.io/projected/5a7d357a-d7de-4c54-b2a2-caa7fb5f7904-kube-api-access-tgggt\") pod \"downloads-7954f5f757-q979p\" (UID: \"5a7d357a-d7de-4c54-b2a2-caa7fb5f7904\") " pod="openshift-console/downloads-7954f5f757-q979p"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274465 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fe586163-823a-49a4-a93e-55e0cc485b8f-images\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274485 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e385a15-7fc9-4b6d-8770-89e954a5b286-config\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274507 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmg6w\" (UniqueName: \"kubernetes.io/projected/c429a77c-763f-4db9-b2e9-7090262bf700-kube-api-access-vmg6w\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274527 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-config\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274548 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-serving-cert\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.274570 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c429a77c-763f-4db9-b2e9-7090262bf700-audit-dir\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.275548 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-client-ca\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.279256 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.279443 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.279581 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.279796 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.279914 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.280016 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.280112 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.280210 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.280321 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.280410 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.280505 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.280589 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.280708 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.282210 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-config\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.283691 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.285085 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.285316 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.285459 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.285670 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.285848 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.286356 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.287816 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.288338 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w9vwd"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.288756 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.289005 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.289522 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.289605 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.289877 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.289963 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.290150 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.290574 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.293356 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.294060 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.294156 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.296203 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-serving-cert\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.300366 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.300873 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-th5m2"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.301054 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.301088 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.301287 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.302400 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.303405 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.306800 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.308380 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9n5nz"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.308786 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gw46q"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.309196 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.309765 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.309942 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9n5nz"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.311657 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lwf4r"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.312124 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.315162 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.315333 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.315922 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sl22j"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.316516 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sl22j"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.316634 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.316787 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.317597 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.319461 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.320549 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.321336 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.330019 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.332364 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.333364 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mw288"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.333467 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.334281 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.341383 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.353398 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.354377 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.354569 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.355361 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.356597 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q46jb"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.358217 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.362760 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.363464 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4sg97"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.364773 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.367108 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ztlqj"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.367556 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-th5m2"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.369062 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.370530 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.370958 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tt465"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.372027 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.372987 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22k76"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.373957 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9rzmw"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.374623 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9rzmw"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.375444 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-q979p"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.377037 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.377912 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dt62n"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378505 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23303764-ac1d-4937-9eb0-1b7f6e24e29f-serving-cert\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378538 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bfr4\" (UniqueName: \"kubernetes.io/projected/23303764-ac1d-4937-9eb0-1b7f6e24e29f-kube-api-access-5bfr4\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378559 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-client-ca\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378576 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f321b740-62d9-4f6a-8aac-0faa316ede0d-etcd-service-ca\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378597 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257s4\" (UniqueName: \"kubernetes.io/projected/0e385a15-7fc9-4b6d-8770-89e954a5b286-kube-api-access-257s4\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378615 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-config\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378631 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe586163-823a-49a4-a93e-55e0cc485b8f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378651 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378669 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx6hw\" (UniqueName: \"kubernetes.io/projected/49d5b890-581d-4f7a-9811-2f011513994f-kube-api-access-fx6hw\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378684 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e385a15-7fc9-4b6d-8770-89e954a5b286-serving-cert\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378700 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23303764-ac1d-4937-9eb0-1b7f6e24e29f-config\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378717 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbxtf\" (UniqueName: \"kubernetes.io/projected/f321b740-62d9-4f6a-8aac-0faa316ede0d-kube-api-access-xbxtf\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378825 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqvn\" (UniqueName: \"kubernetes.io/projected/d377b023-282a-4a7f-a2fb-d944873c3bbb-kube-api-access-6bqvn\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378841 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f321b740-62d9-4f6a-8aac-0faa316ede0d-serving-cert\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378867 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsx4x\" (UniqueName: \"kubernetes.io/projected/a895efff-87fa-45aa-8436-f72feb6ecf83-kube-api-access-nsx4x\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378885 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d377b023-282a-4a7f-a2fb-d944873c3bbb-encryption-config\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378905 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62hq4\" (UniqueName: \"kubernetes.io/projected/fe586163-823a-49a4-a93e-55e0cc485b8f-kube-api-access-62hq4\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378922 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f321b740-62d9-4f6a-8aac-0faa316ede0d-etcd-client\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378941 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a895efff-87fa-45aa-8436-f72feb6ecf83-auth-proxy-config\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378962 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d377b023-282a-4a7f-a2fb-d944873c3bbb-etcd-client\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.378987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c429a77c-763f-4db9-b2e9-7090262bf700-etcd-client\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.379001 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c429a77c-763f-4db9-b2e9-7090262bf700-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.379018 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a895efff-87fa-45aa-8436-f72feb6ecf83-machine-approver-tls\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.379034 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d377b023-282a-4a7f-a2fb-d944873c3bbb-audit-dir\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.379050 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d5b890-581d-4f7a-9811-2f011513994f-serving-cert\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.379065 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c429a77c-763f-4db9-b2e9-7090262bf700-encryption-config\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.379088 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23303764-ac1d-4937-9eb0-1b7f6e24e29f-service-ca-bundle\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.379106 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe586163-823a-49a4-a93e-55e0cc485b8f-config\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.379216 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d377b023-282a-4a7f-a2fb-d944873c3bbb-audit-dir\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.379958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a895efff-87fa-45aa-8436-f72feb6ecf83-auth-proxy-config\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.380058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23303764-ac1d-4937-9eb0-1b7f6e24e29f-config\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.380193 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c429a77c-763f-4db9-b2e9-7090262bf700-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.380354 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f321b740-62d9-4f6a-8aac-0faa316ede0d-etcd-service-ca\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.380646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.381030 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-client-ca\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.381115 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f321b740-62d9-4f6a-8aac-0faa316ede0d-etcd-ca\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.381749 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-config\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382155 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f321b740-62d9-4f6a-8aac-0faa316ede0d-config\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382195 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e385a15-7fc9-4b6d-8770-89e954a5b286-trusted-ca\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382219 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23303764-ac1d-4937-9eb0-1b7f6e24e29f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382246 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-audit\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382273 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d377b023-282a-4a7f-a2fb-d944873c3bbb-serving-cert\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382297 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d377b023-282a-4a7f-a2fb-d944873c3bbb-node-pullsecrets\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382392 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-image-import-ca\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382444 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c429a77c-763f-4db9-b2e9-7090262bf700-serving-cert\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c429a77c-763f-4db9-b2e9-7090262bf700-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382499 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-etcd-serving-ca\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382528 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a895efff-87fa-45aa-8436-f72feb6ecf83-config\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382578 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d377b023-282a-4a7f-a2fb-d944873c3bbb-etcd-client\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382787 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c429a77c-763f-4db9-b2e9-7090262bf700-etcd-client\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382819 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23303764-ac1d-4937-9eb0-1b7f6e24e29f-serving-cert\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382907 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gw46q"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382998 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x2gs6"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383065 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383206 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e385a15-7fc9-4b6d-8770-89e954a5b286-serving-cert\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383297 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-audit\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.382553 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c429a77c-763f-4db9-b2e9-7090262bf700-audit-policies\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383356 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fe586163-823a-49a4-a93e-55e0cc485b8f-images\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383382 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgggt\" (UniqueName: \"kubernetes.io/projected/5a7d357a-d7de-4c54-b2a2-caa7fb5f7904-kube-api-access-tgggt\") pod \"downloads-7954f5f757-q979p\" (UID: \"5a7d357a-d7de-4c54-b2a2-caa7fb5f7904\") " pod="openshift-console/downloads-7954f5f757-q979p"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383404 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmg6w\" (UniqueName: \"kubernetes.io/projected/c429a77c-763f-4db9-b2e9-7090262bf700-kube-api-access-vmg6w\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383426 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e385a15-7fc9-4b6d-8770-89e954a5b286-config\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383446 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-config\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383469 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c429a77c-763f-4db9-b2e9-7090262bf700-audit-dir\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383536 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c429a77c-763f-4db9-b2e9-7090262bf700-audit-dir\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383699 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d377b023-282a-4a7f-a2fb-d944873c3bbb-node-pullsecrets\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.383827 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a895efff-87fa-45aa-8436-f72feb6ecf83-machine-approver-tls\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.384172 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23303764-ac1d-4937-9eb0-1b7f6e24e29f-service-ca-bundle\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.384252 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c429a77c-763f-4db9-b2e9-7090262bf700-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.384367 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a895efff-87fa-45aa-8436-f72feb6ecf83-config\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.384961 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23303764-ac1d-4937-9eb0-1b7f6e24e29f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.385001 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fe586163-823a-49a4-a93e-55e0cc485b8f-images\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.385116 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c429a77c-763f-4db9-b2e9-7090262bf700-audit-policies\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.385358 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e385a15-7fc9-4b6d-8770-89e954a5b286-config\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.385418 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e385a15-7fc9-4b6d-8770-89e954a5b286-trusted-ca\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.385488 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-etcd-serving-ca\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.385545 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f321b740-62d9-4f6a-8aac-0faa316ede0d-etcd-ca\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.385807 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f321b740-62d9-4f6a-8aac-0faa316ede0d-config\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.385917 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-config\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.386017 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f321b740-62d9-4f6a-8aac-0faa316ede0d-etcd-client\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.386081 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d377b023-282a-4a7f-a2fb-d944873c3bbb-image-import-ca\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.386344 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe586163-823a-49a4-a93e-55e0cc485b8f-config\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.386771 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c429a77c-763f-4db9-b2e9-7090262bf700-encryption-config\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.387092 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f321b740-62d9-4f6a-8aac-0faa316ede0d-serving-cert\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.387339 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d377b023-282a-4a7f-a2fb-d944873c3bbb-encryption-config\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.387435 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d5b890-581d-4f7a-9811-2f011513994f-serving-cert\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.387573 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c429a77c-763f-4db9-b2e9-7090262bf700-serving-cert\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.387952 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe586163-823a-49a4-a93e-55e0cc485b8f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.388379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d377b023-282a-4a7f-a2fb-d944873c3bbb-serving-cert\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.388429 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.389916 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.391892 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.392759 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.393442 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w9vwd"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.397235 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.399123 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.400329 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.402314 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.403770 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.404910 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jcp67"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.405506 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jcp67"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.406642 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wdlml"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.411955 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lwf4r"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.412113 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wdlml"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.414811 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.415497 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.417849 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.420385 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wdlml"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.423284 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.425072 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.426668 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sl22j"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.428514 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.430069 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9rzmw"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.431066 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8vwj6"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.431847 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8vwj6"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.432084 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.432326 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8vwj6"]
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.459202 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.471630 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.491514 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.511782 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.531033 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.551742 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.572062 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.587359 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.587398 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.587398 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.587509 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.591829 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.611649 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.634508 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.652199 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.671351 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.700184 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.712167 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.732878 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.752276 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.809443 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbcdm\" (UniqueName: \"kubernetes.io/projected/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-kube-api-access-cbcdm\") pod \"controller-manager-879f6c89f-mw288\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.832877 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.854088 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.872261 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.888523 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf-serving-cert\") pod \"openshift-config-operator-7777fb866f-4sg97\" (UID: \"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.888653 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.888995 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889054 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889100 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8rd\" (UniqueName: \"kubernetes.io/projected/2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf-kube-api-access-7n8rd\") pod \"openshift-config-operator-7777fb866f-4sg97\" (UID: \"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889139 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4sg97\" (UID: \"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889171 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-dir\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889208 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/45443a27-b7dd-423e-a936-64caffea35bc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nb4h5\" (UID: \"45443a27-b7dd-423e-a936-64caffea35bc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889326 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-certificates\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: E1209 14:25:04.889419 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-09 14:25:05.389401073 +0000 UTC m=+137.285603209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889486 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/57ed857f-e806-40ad-bd78-4aecbfc24699-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889510 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/57ed857f-e806-40ad-bd78-4aecbfc24699-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889548 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-tls\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889567 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889602 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcwl\" (UniqueName: \"kubernetes.io/projected/45443a27-b7dd-423e-a936-64caffea35bc-kube-api-access-kkcwl\") pod \"cluster-samples-operator-665b6dd947-nb4h5\" (UID: \"45443a27-b7dd-423e-a936-64caffea35bc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889695 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889747 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7tp5\" (UniqueName: 
\"kubernetes.io/projected/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-kube-api-access-j7tp5\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889769 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-bound-sa-token\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889785 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2br6\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-kube-api-access-w2br6\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889801 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889819 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.889937 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890025 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890062 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890093 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890146 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-policies\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890178 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890217 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-trusted-ca\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890249 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890341 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890461 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.890525 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbrf2\" (UniqueName: \"kubernetes.io/projected/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-kube-api-access-hbrf2\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 
14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.892755 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.913606 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.932853 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.952248 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.972006 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.991663 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:04 crc kubenswrapper[4770]: E1209 14:25:04.991890 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:05.491870129 +0000 UTC m=+137.388072265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992236 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/57ed857f-e806-40ad-bd78-4aecbfc24699-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992290 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-default-certificate\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992289 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992334 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpfvj\" (UniqueName: \"kubernetes.io/projected/f549f5f6-e480-4574-b59e-6b581bb4ac41-kube-api-access-zpfvj\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-dhbrp\" (UID: \"f549f5f6-e480-4574-b59e-6b581bb4ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992411 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7416f097-b90b-46d1-b02d-b08b277b687d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992445 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/452dfc2f-000a-4b05-844a-ef541824574f-serving-cert\") pod \"service-ca-operator-777779d784-gw46q\" (UID: \"452dfc2f-000a-4b05-844a-ef541824574f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992505 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-tls\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992535 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992561 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3f1c07-833a-42f8-83a4-57683456d858-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s2p7g\" (UID: \"7f3f1c07-833a-42f8-83a4-57683456d858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24992d6a-277e-43cb-8d65-9ddbcfdad19a-config-volume\") pod \"dns-default-9rzmw\" (UID: \"24992d6a-277e-43cb-8d65-9ddbcfdad19a\") " pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992647 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrth\" (UniqueName: \"kubernetes.io/projected/7f3f1c07-833a-42f8-83a4-57683456d858-kube-api-access-bjrth\") pod \"package-server-manager-789f6589d5-s2p7g\" (UID: \"7f3f1c07-833a-42f8-83a4-57683456d858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992685 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkcwl\" (UniqueName: 
\"kubernetes.io/projected/45443a27-b7dd-423e-a936-64caffea35bc-kube-api-access-kkcwl\") pod \"cluster-samples-operator-665b6dd947-nb4h5\" (UID: \"45443a27-b7dd-423e-a936-64caffea35bc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992791 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4gd7\" (UniqueName: \"kubernetes.io/projected/ba79bdf2-2665-4caf-8c00-20c1405c319b-kube-api-access-f4gd7\") pod \"multus-admission-controller-857f4d67dd-lwf4r\" (UID: \"ba79bdf2-2665-4caf-8c00-20c1405c319b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ad3f94de-e874-4a00-81f7-2de81795621a-tmpfs\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992878 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bf390656-1b39-4f08-922b-0bc77c98a897-signing-cabundle\") pod \"service-ca-9c57cc56f-sl22j\" (UID: \"bf390656-1b39-4f08-922b-0bc77c98a897\") " pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.992973 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-node-bootstrap-token\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993242 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-config-volume\") pod \"collect-profiles-29421495-8hdl7\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993385 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-registration-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993456 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-metrics-certs\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993511 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6424ee19-eb6b-462d-948f-e9e8a16936a8-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-qh72r\" (UID: \"6424ee19-eb6b-462d-948f-e9e8a16936a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993538 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa31696f-9a08-4331-b0a7-e3f396284903-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g9j6b\" (UID: \"aa31696f-9a08-4331-b0a7-e3f396284903\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993589 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpkpk\" (UID: \"af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993616 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993623 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7tp5\" (UniqueName: \"kubernetes.io/projected/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-kube-api-access-j7tp5\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993687 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa31696f-9a08-4331-b0a7-e3f396284903-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g9j6b\" (UID: \"aa31696f-9a08-4331-b0a7-e3f396284903\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993712 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj9ss\" (UniqueName: \"kubernetes.io/projected/08d594b0-871f-4f3f-9d64-f14f0773be76-kube-api-access-rj9ss\") pod \"marketplace-operator-79b997595-th5m2\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993766 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-bound-sa-token\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993790 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2br6\" (UniqueName: 
\"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-kube-api-access-w2br6\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993813 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993838 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7726e7fd-9414-4440-a0f6-caeb757f7001-srv-cert\") pod \"catalog-operator-68c6474976-rcwj8\" (UID: \"7726e7fd-9414-4440-a0f6-caeb757f7001\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993872 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993894 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bf390656-1b39-4f08-922b-0bc77c98a897-signing-key\") pod \"service-ca-9c57cc56f-sl22j\" (UID: \"bf390656-1b39-4f08-922b-0bc77c98a897\") " pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993916 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af35dcd7-1005-457a-a0a0-5f72b10b5ff8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gqqz7\" (UID: \"af35dcd7-1005-457a-a0a0-5f72b10b5ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993953 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87vtr\" (UniqueName: \"kubernetes.io/projected/f0d2ce2e-1c11-4912-ba78-f4ae51537765-kube-api-access-87vtr\") pod \"migrator-59844c95c7-8htf4\" (UID: \"f0d2ce2e-1c11-4912-ba78-f4ae51537765\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993972 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4vc5\" (UniqueName: \"kubernetes.io/projected/7726e7fd-9414-4440-a0f6-caeb757f7001-kube-api-access-h4vc5\") pod \"catalog-operator-68c6474976-rcwj8\" (UID: \"7726e7fd-9414-4440-a0f6-caeb757f7001\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.993996 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlwzf\" (UniqueName: 
\"kubernetes.io/projected/bf390656-1b39-4f08-922b-0bc77c98a897-kube-api-access-xlwzf\") pod \"service-ca-9c57cc56f-sl22j\" (UID: \"bf390656-1b39-4f08-922b-0bc77c98a897\") " pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994018 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgw5q\" (UniqueName: \"kubernetes.io/projected/452dfc2f-000a-4b05-844a-ef541824574f-kube-api-access-cgw5q\") pod \"service-ca-operator-777779d784-gw46q\" (UID: \"452dfc2f-000a-4b05-844a-ef541824574f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994040 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-csi-data-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994097 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mngws\" (UniqueName: \"kubernetes.io/projected/7416f097-b90b-46d1-b02d-b08b277b687d-kube-api-access-mngws\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994137 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgpn\" (UniqueName: \"kubernetes.io/projected/24992d6a-277e-43cb-8d65-9ddbcfdad19a-kube-api-access-ghgpn\") pod \"dns-default-9rzmw\" (UID: \"24992d6a-277e-43cb-8d65-9ddbcfdad19a\") " pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994220 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9x5f\" (UniqueName: \"kubernetes.io/projected/a91ebfed-0165-4fea-b8f5-560094e2e1a0-kube-api-access-f9x5f\") pod \"olm-operator-6b444d44fb-4cvtt\" (UID: \"a91ebfed-0165-4fea-b8f5-560094e2e1a0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-trusted-ca\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994298 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-trusted-ca-bundle\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994339 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c3aaa7b-f11e-484b-bf29-7b237e496506-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: 
\"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994372 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbr4d\" (UniqueName: \"kubernetes.io/projected/3c3aaa7b-f11e-484b-bf29-7b237e496506-kube-api-access-nbr4d\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994436 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994537 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994606 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/452dfc2f-000a-4b05-844a-ef541824574f-config\") pod \"service-ca-operator-777779d784-gw46q\" (UID: \"452dfc2f-000a-4b05-844a-ef541824574f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994637 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7726e7fd-9414-4440-a0f6-caeb757f7001-profile-collector-cert\") pod \"catalog-operator-68c6474976-rcwj8\" (UID: \"7726e7fd-9414-4440-a0f6-caeb757f7001\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994666 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af35dcd7-1005-457a-a0a0-5f72b10b5ff8-config\") pod \"kube-apiserver-operator-766d6c64bb-gqqz7\" (UID: \"af35dcd7-1005-457a-a0a0-5f72b10b5ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994713 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e84eac50-0a7a-41d2-bfdb-4b825beef104-metrics-tls\") pod \"dns-operator-744455d44c-w9vwd\" (UID: \"e84eac50-0a7a-41d2-bfdb-4b825beef104\") " pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994762 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-socket-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: 
\"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994792 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93465239-f81e-4801-95d3-520b7378fd5f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-htx5v\" (UID: \"93465239-f81e-4801-95d3-520b7378fd5f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994821 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftcdq\" (UniqueName: \"kubernetes.io/projected/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-kube-api-access-ftcdq\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994862 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.994894 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d4f85c79-a083-4299-85cc-4ca7a7cd0bae-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kwgt8\" (UID: \"d4f85c79-a083-4299-85cc-4ca7a7cd0bae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995029 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995118 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9bbab9-3f87-4e24-923b-8af770ccbbfc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b89dw\" (UID: \"cc9bbab9-3f87-4e24-923b-8af770ccbbfc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995163 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4sg97\" (UID: \"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995225 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-dir\") pod 
\"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995249 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-config\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995269 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rh6\" (UniqueName: \"kubernetes.io/projected/826b4024-9fd1-4457-95f4-13dfd107b12b-kube-api-access-p7rh6\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995550 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-trusted-ca\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995602 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-dir\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995855 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4sg97\" (UID: \"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.996268 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.996296 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-tls\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.996970 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/57ed857f-e806-40ad-bd78-4aecbfc24699-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.998058 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.995291 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-secret-volume\") pod \"collect-profiles-29421495-8hdl7\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999090 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdg4d\" (UniqueName: \"kubernetes.io/projected/fbd4701f-941b-4955-aa30-ead9b8b203c0-kube-api-access-qdg4d\") pod \"ingress-canary-8vwj6\" (UID: \"fbd4701f-941b-4955-aa30-ead9b8b203c0\") " pod="openshift-ingress-canary/ingress-canary-8vwj6" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999116 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9bbab9-3f87-4e24-923b-8af770ccbbfc-config\") pod \"kube-controller-manager-operator-78b949d7b-b89dw\" (UID: \"cc9bbab9-3f87-4e24-923b-8af770ccbbfc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999168 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999174 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7416f097-b90b-46d1-b02d-b08b277b687d-metrics-tls\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999247 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/57ed857f-e806-40ad-bd78-4aecbfc24699-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999286 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-service-ca\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999325 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chr76\" (UniqueName: 
\"kubernetes.io/projected/6424ee19-eb6b-462d-948f-e9e8a16936a8-kube-api-access-chr76\") pod \"openshift-controller-manager-operator-756b6f6bc6-qh72r\" (UID: \"6424ee19-eb6b-462d-948f-e9e8a16936a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999395 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c3aaa7b-f11e-484b-bf29-7b237e496506-proxy-tls\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999451 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-th5m2\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999472 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a91ebfed-0165-4fea-b8f5-560094e2e1a0-srv-cert\") pod \"olm-operator-6b444d44fb-4cvtt\" (UID: \"a91ebfed-0165-4fea-b8f5-560094e2e1a0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999512 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999525 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad3f94de-e874-4a00-81f7-2de81795621a-apiservice-cert\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999600 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpbpj\" (UniqueName: \"kubernetes.io/projected/d4f85c79-a083-4299-85cc-4ca7a7cd0bae-kube-api-access-lpbpj\") pod \"machine-config-controller-84d6567774-kwgt8\" (UID: \"d4f85c79-a083-4299-85cc-4ca7a7cd0bae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999654 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f549f5f6-e480-4574-b59e-6b581bb4ac41-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhbrp\" (UID: \"f549f5f6-e480-4574-b59e-6b581bb4ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999687 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8cqh\" (UniqueName: \"kubernetes.io/projected/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-kube-api-access-c8cqh\") pod \"collect-profiles-29421495-8hdl7\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:04 crc kubenswrapper[4770]: I1209 14:25:04.999738 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbmlv\" (UniqueName: \"kubernetes.io/projected/af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e-kube-api-access-sbmlv\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpkpk\" (UID: \"af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:04.999804 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbd4701f-941b-4955-aa30-ead9b8b203c0-cert\") pod \"ingress-canary-8vwj6\" (UID: \"fbd4701f-941b-4955-aa30-ead9b8b203c0\") " pod="openshift-ingress-canary/ingress-canary-8vwj6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:04.999850 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:04.999888 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bccd\" (UniqueName: \"kubernetes.io/projected/ad3f94de-e874-4a00-81f7-2de81795621a-kube-api-access-8bccd\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000000 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/57ed857f-e806-40ad-bd78-4aecbfc24699-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000049 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93465239-f81e-4801-95d3-520b7378fd5f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-htx5v\" (UID: \"93465239-f81e-4801-95d3-520b7378fd5f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000090 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc9bbab9-3f87-4e24-923b-8af770ccbbfc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b89dw\" (UID: \"cc9bbab9-3f87-4e24-923b-8af770ccbbfc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 
14:25:05.000130 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-oauth-serving-cert\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000166 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad3f94de-e874-4a00-81f7-2de81795621a-webhook-cert\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000259 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-oauth-config\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000367 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000417 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgx95\" (UniqueName: \"kubernetes.io/projected/c1822a0a-3dcd-455f-a11c-15c6171f2068-kube-api-access-zgx95\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000455 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh6hl\" (UniqueName: \"kubernetes.io/projected/e84eac50-0a7a-41d2-bfdb-4b825beef104-kube-api-access-kh6hl\") pod \"dns-operator-744455d44c-w9vwd\" (UID: \"e84eac50-0a7a-41d2-bfdb-4b825beef104\") " pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000491 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvfc\" (UniqueName: \"kubernetes.io/projected/aa31696f-9a08-4331-b0a7-e3f396284903-kube-api-access-5bvfc\") pod \"openshift-apiserver-operator-796bbdcf4f-g9j6b\" (UID: \"aa31696f-9a08-4331-b0a7-e3f396284903\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000553 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000590 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000626 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000663 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-serving-cert\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000695 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-mountpoint-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000756 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93465239-f81e-4801-95d3-520b7378fd5f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-htx5v\" (UID: \"93465239-f81e-4801-95d3-520b7378fd5f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000790 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f549f5f6-e480-4574-b59e-6b581bb4ac41-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhbrp\" (UID: \"f549f5f6-e480-4574-b59e-6b581bb4ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000825 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-policies\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000865 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.000986 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c3aaa7b-f11e-484b-bf29-7b237e496506-images\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001055 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001087 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001122 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbrf2\" (UniqueName: \"kubernetes.io/projected/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-kube-api-access-hbrf2\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001203 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-service-ca-bundle\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001228 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf-serving-cert\") pod \"openshift-config-operator-7777fb866f-4sg97\" (UID: \"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001251 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba79bdf2-2665-4caf-8c00-20c1405c319b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lwf4r\" (UID: \"ba79bdf2-2665-4caf-8c00-20c1405c319b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001273 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jprvg\" (UniqueName: \"kubernetes.io/projected/d0930499-6325-4612-9225-3ee8a11d613a-kube-api-access-jprvg\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001291 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-th5m2\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001345 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n8rd\" (UniqueName: \"kubernetes.io/projected/2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf-kube-api-access-7n8rd\") pod \"openshift-config-operator-7777fb866f-4sg97\" (UID: \"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001390 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6424ee19-eb6b-462d-948f-e9e8a16936a8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qh72r\" (UID: \"6424ee19-eb6b-462d-948f-e9e8a16936a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001407 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-stats-auth\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001441 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/24992d6a-277e-43cb-8d65-9ddbcfdad19a-metrics-tls\") pod \"dns-default-9rzmw\" (UID: \"24992d6a-277e-43cb-8d65-9ddbcfdad19a\") " pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001459 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d4f85c79-a083-4299-85cc-4ca7a7cd0bae-proxy-tls\") pod \"machine-config-controller-84d6567774-kwgt8\" (UID: \"d4f85c79-a083-4299-85cc-4ca7a7cd0bae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001476 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-certs\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001493 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/a91ebfed-0165-4fea-b8f5-560094e2e1a0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4cvtt\" (UID: \"a91ebfed-0165-4fea-b8f5-560094e2e1a0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001516 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-plugins-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001535 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7416f097-b90b-46d1-b02d-b08b277b687d-trusted-ca\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001556 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af35dcd7-1005-457a-a0a0-5f72b10b5ff8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gqqz7\" (UID: \"af35dcd7-1005-457a-a0a0-5f72b10b5ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001577 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/45443a27-b7dd-423e-a936-64caffea35bc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nb4h5\" (UID: \"45443a27-b7dd-423e-a936-64caffea35bc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001605 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-certificates\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.001992 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.002015 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-policies\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.002365 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:25:05.502346104 +0000 UTC m=+137.398548260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.002973 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-certificates\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.004497 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.005877 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.006457 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.006699 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf-serving-cert\") pod \"openshift-config-operator-7777fb866f-4sg97\" (UID: \"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.007103 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/45443a27-b7dd-423e-a936-64caffea35bc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nb4h5\" (UID: \"45443a27-b7dd-423e-a936-64caffea35bc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.007297 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-22k76\" (UID: 
\"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.009325 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.011665 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.012131 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.031915 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.051832 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.071857 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.074042 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.092760 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.113374 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.114005 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:05.613975819 +0000 UTC m=+137.510177965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.114173 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-mountpoint-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.114299 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93465239-f81e-4801-95d3-520b7378fd5f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-htx5v\" (UID: \"93465239-f81e-4801-95d3-520b7378fd5f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.114403 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f549f5f6-e480-4574-b59e-6b581bb4ac41-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhbrp\" (UID: \"f549f5f6-e480-4574-b59e-6b581bb4ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.114560 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-serving-cert\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.114691 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c3aaa7b-f11e-484b-bf29-7b237e496506-images\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.114869 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-service-ca-bundle\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115022 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-th5m2\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.114244 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-mountpoint-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115180 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba79bdf2-2665-4caf-8c00-20c1405c319b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lwf4r\" (UID: \"ba79bdf2-2665-4caf-8c00-20c1405c319b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115313 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jprvg\" (UniqueName: \"kubernetes.io/projected/d0930499-6325-4612-9225-3ee8a11d613a-kube-api-access-jprvg\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115388 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115476 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6424ee19-eb6b-462d-948f-e9e8a16936a8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qh72r\" (UID: \"6424ee19-eb6b-462d-948f-e9e8a16936a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115529 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-stats-auth\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115577 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-certs\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115683 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93465239-f81e-4801-95d3-520b7378fd5f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-htx5v\" (UID: \"93465239-f81e-4801-95d3-520b7378fd5f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115721 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a91ebfed-0165-4fea-b8f5-560094e2e1a0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4cvtt\" (UID: 
\"a91ebfed-0165-4fea-b8f5-560094e2e1a0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115819 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/24992d6a-277e-43cb-8d65-9ddbcfdad19a-metrics-tls\") pod \"dns-default-9rzmw\" (UID: \"24992d6a-277e-43cb-8d65-9ddbcfdad19a\") " pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115863 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d4f85c79-a083-4299-85cc-4ca7a7cd0bae-proxy-tls\") pod \"machine-config-controller-84d6567774-kwgt8\" (UID: \"d4f85c79-a083-4299-85cc-4ca7a7cd0bae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115916 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-plugins-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115959 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7416f097-b90b-46d1-b02d-b08b277b687d-trusted-ca\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.116495 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6424ee19-eb6b-462d-948f-e9e8a16936a8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qh72r\" (UID: \"6424ee19-eb6b-462d-948f-e9e8a16936a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.115922 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.115983 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:05.615958115 +0000 UTC m=+137.512160251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.116833 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af35dcd7-1005-457a-a0a0-5f72b10b5ff8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gqqz7\" (UID: \"af35dcd7-1005-457a-a0a0-5f72b10b5ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.116879 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-default-certificate\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.116917 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/452dfc2f-000a-4b05-844a-ef541824574f-serving-cert\") pod \"service-ca-operator-777779d784-gw46q\" (UID: \"452dfc2f-000a-4b05-844a-ef541824574f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.116975 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpfvj\" (UniqueName: \"kubernetes.io/projected/f549f5f6-e480-4574-b59e-6b581bb4ac41-kube-api-access-zpfvj\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhbrp\" (UID: \"f549f5f6-e480-4574-b59e-6b581bb4ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117001 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7416f097-b90b-46d1-b02d-b08b277b687d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117052 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3f1c07-833a-42f8-83a4-57683456d858-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s2p7g\" (UID: \"7f3f1c07-833a-42f8-83a4-57683456d858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117082 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24992d6a-277e-43cb-8d65-9ddbcfdad19a-config-volume\") pod \"dns-default-9rzmw\" (UID: \"24992d6a-277e-43cb-8d65-9ddbcfdad19a\") " pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117079 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-plugins-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117105 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrth\" (UniqueName: \"kubernetes.io/projected/7f3f1c07-833a-42f8-83a4-57683456d858-kube-api-access-bjrth\") pod \"package-server-manager-789f6589d5-s2p7g\" (UID: \"7f3f1c07-833a-42f8-83a4-57683456d858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117404 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ad3f94de-e874-4a00-81f7-2de81795621a-tmpfs\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117429 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4gd7\" (UniqueName: \"kubernetes.io/projected/ba79bdf2-2665-4caf-8c00-20c1405c319b-kube-api-access-f4gd7\") pod \"multus-admission-controller-857f4d67dd-lwf4r\" (UID: \"ba79bdf2-2665-4caf-8c00-20c1405c319b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117460 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-node-bootstrap-token\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117488 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bf390656-1b39-4f08-922b-0bc77c98a897-signing-cabundle\") pod \"service-ca-9c57cc56f-sl22j\" (UID: \"bf390656-1b39-4f08-922b-0bc77c98a897\") " pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117512 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-config-volume\") pod \"collect-profiles-29421495-8hdl7\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117530 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-registration-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117551 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-metrics-certs\") pod \"router-default-5444994796-9n5nz\" (UID: 
\"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117584 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6424ee19-eb6b-462d-948f-e9e8a16936a8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qh72r\" (UID: \"6424ee19-eb6b-462d-948f-e9e8a16936a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117604 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa31696f-9a08-4331-b0a7-e3f396284903-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g9j6b\" (UID: \"aa31696f-9a08-4331-b0a7-e3f396284903\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117626 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpkpk\" (UID: \"af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117652 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa31696f-9a08-4331-b0a7-e3f396284903-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g9j6b\" (UID: \"aa31696f-9a08-4331-b0a7-e3f396284903\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj9ss\" (UniqueName: \"kubernetes.io/projected/08d594b0-871f-4f3f-9d64-f14f0773be76-kube-api-access-rj9ss\") pod \"marketplace-operator-79b997595-th5m2\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117697 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7726e7fd-9414-4440-a0f6-caeb757f7001-srv-cert\") pod \"catalog-operator-68c6474976-rcwj8\" (UID: \"7726e7fd-9414-4440-a0f6-caeb757f7001\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117754 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af35dcd7-1005-457a-a0a0-5f72b10b5ff8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gqqz7\" (UID: \"af35dcd7-1005-457a-a0a0-5f72b10b5ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117785 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bf390656-1b39-4f08-922b-0bc77c98a897-signing-key\") pod \"service-ca-9c57cc56f-sl22j\" (UID: \"bf390656-1b39-4f08-922b-0bc77c98a897\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117807 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgw5q\" (UniqueName: \"kubernetes.io/projected/452dfc2f-000a-4b05-844a-ef541824574f-kube-api-access-cgw5q\") pod \"service-ca-operator-777779d784-gw46q\" (UID: \"452dfc2f-000a-4b05-844a-ef541824574f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117825 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87vtr\" (UniqueName: \"kubernetes.io/projected/f0d2ce2e-1c11-4912-ba78-f4ae51537765-kube-api-access-87vtr\") pod \"migrator-59844c95c7-8htf4\" (UID: \"f0d2ce2e-1c11-4912-ba78-f4ae51537765\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4vc5\" (UniqueName: \"kubernetes.io/projected/7726e7fd-9414-4440-a0f6-caeb757f7001-kube-api-access-h4vc5\") pod \"catalog-operator-68c6474976-rcwj8\" (UID: \"7726e7fd-9414-4440-a0f6-caeb757f7001\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117864 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlwzf\" (UniqueName: \"kubernetes.io/projected/bf390656-1b39-4f08-922b-0bc77c98a897-kube-api-access-xlwzf\") pod \"service-ca-9c57cc56f-sl22j\" (UID: \"bf390656-1b39-4f08-922b-0bc77c98a897\") " pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117881 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-csi-data-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117903 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mngws\" (UniqueName: \"kubernetes.io/projected/7416f097-b90b-46d1-b02d-b08b277b687d-kube-api-access-mngws\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117924 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgpn\" (UniqueName: \"kubernetes.io/projected/24992d6a-277e-43cb-8d65-9ddbcfdad19a-kube-api-access-ghgpn\") pod \"dns-default-9rzmw\" (UID: \"24992d6a-277e-43cb-8d65-9ddbcfdad19a\") " pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117943 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9x5f\" (UniqueName: \"kubernetes.io/projected/a91ebfed-0165-4fea-b8f5-560094e2e1a0-kube-api-access-f9x5f\") pod \"olm-operator-6b444d44fb-4cvtt\" (UID: \"a91ebfed-0165-4fea-b8f5-560094e2e1a0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117966 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-trusted-ca-bundle\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.117985 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c3aaa7b-f11e-484b-bf29-7b237e496506-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118009 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbr4d\" (UniqueName: \"kubernetes.io/projected/3c3aaa7b-f11e-484b-bf29-7b237e496506-kube-api-access-nbr4d\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118035 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af35dcd7-1005-457a-a0a0-5f72b10b5ff8-config\") pod \"kube-apiserver-operator-766d6c64bb-gqqz7\" (UID: \"af35dcd7-1005-457a-a0a0-5f72b10b5ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118061 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/452dfc2f-000a-4b05-844a-ef541824574f-config\") pod \"service-ca-operator-777779d784-gw46q\" (UID: \"452dfc2f-000a-4b05-844a-ef541824574f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118079 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7726e7fd-9414-4440-a0f6-caeb757f7001-profile-collector-cert\") pod \"catalog-operator-68c6474976-rcwj8\" (UID: \"7726e7fd-9414-4440-a0f6-caeb757f7001\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118100 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93465239-f81e-4801-95d3-520b7378fd5f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-htx5v\" (UID: \"93465239-f81e-4801-95d3-520b7378fd5f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118120 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftcdq\" (UniqueName: \"kubernetes.io/projected/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-kube-api-access-ftcdq\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118143 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e84eac50-0a7a-41d2-bfdb-4b825beef104-metrics-tls\") pod 
\"dns-operator-744455d44c-w9vwd\" (UID: \"e84eac50-0a7a-41d2-bfdb-4b825beef104\") " pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118164 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-socket-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118197 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d4f85c79-a083-4299-85cc-4ca7a7cd0bae-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kwgt8\" (UID: \"d4f85c79-a083-4299-85cc-4ca7a7cd0bae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118219 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9bbab9-3f87-4e24-923b-8af770ccbbfc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b89dw\" (UID: \"cc9bbab9-3f87-4e24-923b-8af770ccbbfc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118241 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7rh6\" (UniqueName: \"kubernetes.io/projected/826b4024-9fd1-4457-95f4-13dfd107b12b-kube-api-access-p7rh6\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118262 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-config\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118285 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-secret-volume\") pod \"collect-profiles-29421495-8hdl7\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118303 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdg4d\" (UniqueName: \"kubernetes.io/projected/fbd4701f-941b-4955-aa30-ead9b8b203c0-kube-api-access-qdg4d\") pod \"ingress-canary-8vwj6\" (UID: \"fbd4701f-941b-4955-aa30-ead9b8b203c0\") " pod="openshift-ingress-canary/ingress-canary-8vwj6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118320 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9bbab9-3f87-4e24-923b-8af770ccbbfc-config\") pod \"kube-controller-manager-operator-78b949d7b-b89dw\" (UID: \"cc9bbab9-3f87-4e24-923b-8af770ccbbfc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 
14:25:05.118346 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7416f097-b90b-46d1-b02d-b08b277b687d-metrics-tls\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118369 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-service-ca\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118386 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chr76\" (UniqueName: \"kubernetes.io/projected/6424ee19-eb6b-462d-948f-e9e8a16936a8-kube-api-access-chr76\") pod \"openshift-controller-manager-operator-756b6f6bc6-qh72r\" (UID: \"6424ee19-eb6b-462d-948f-e9e8a16936a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118402 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c3aaa7b-f11e-484b-bf29-7b237e496506-proxy-tls\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118468 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-th5m2\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118485 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a91ebfed-0165-4fea-b8f5-560094e2e1a0-srv-cert\") pod \"olm-operator-6b444d44fb-4cvtt\" (UID: \"a91ebfed-0165-4fea-b8f5-560094e2e1a0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118508 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad3f94de-e874-4a00-81f7-2de81795621a-apiservice-cert\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118526 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpbpj\" (UniqueName: \"kubernetes.io/projected/d4f85c79-a083-4299-85cc-4ca7a7cd0bae-kube-api-access-lpbpj\") pod \"machine-config-controller-84d6567774-kwgt8\" (UID: \"d4f85c79-a083-4299-85cc-4ca7a7cd0bae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118543 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f549f5f6-e480-4574-b59e-6b581bb4ac41-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhbrp\" (UID: \"f549f5f6-e480-4574-b59e-6b581bb4ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118561 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8cqh\" (UniqueName: \"kubernetes.io/projected/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-kube-api-access-c8cqh\") pod \"collect-profiles-29421495-8hdl7\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118580 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbmlv\" (UniqueName: \"kubernetes.io/projected/af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e-kube-api-access-sbmlv\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpkpk\" (UID: \"af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118595 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbd4701f-941b-4955-aa30-ead9b8b203c0-cert\") pod \"ingress-canary-8vwj6\" (UID: \"fbd4701f-941b-4955-aa30-ead9b8b203c0\") " pod="openshift-ingress-canary/ingress-canary-8vwj6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118619 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bccd\" (UniqueName: \"kubernetes.io/projected/ad3f94de-e874-4a00-81f7-2de81795621a-kube-api-access-8bccd\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93465239-f81e-4801-95d3-520b7378fd5f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-htx5v\" (UID: \"93465239-f81e-4801-95d3-520b7378fd5f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118654 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc9bbab9-3f87-4e24-923b-8af770ccbbfc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b89dw\" (UID: \"cc9bbab9-3f87-4e24-923b-8af770ccbbfc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118672 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-oauth-serving-cert\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118690 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad3f94de-e874-4a00-81f7-2de81795621a-webhook-cert\") 
pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118709 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-oauth-config\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118749 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgx95\" (UniqueName: \"kubernetes.io/projected/c1822a0a-3dcd-455f-a11c-15c6171f2068-kube-api-access-zgx95\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118776 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh6hl\" (UniqueName: \"kubernetes.io/projected/e84eac50-0a7a-41d2-bfdb-4b825beef104-kube-api-access-kh6hl\") pod \"dns-operator-744455d44c-w9vwd\" (UID: \"e84eac50-0a7a-41d2-bfdb-4b825beef104\") " pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.118799 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvfc\" (UniqueName: \"kubernetes.io/projected/aa31696f-9a08-4331-b0a7-e3f396284903-kube-api-access-5bvfc\") pod \"openshift-apiserver-operator-796bbdcf4f-g9j6b\" (UID: \"aa31696f-9a08-4331-b0a7-e3f396284903\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.119297 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7416f097-b90b-46d1-b02d-b08b277b687d-trusted-ca\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.119683 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ad3f94de-e874-4a00-81f7-2de81795621a-tmpfs\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.119954 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-registration-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.120553 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a91ebfed-0165-4fea-b8f5-560094e2e1a0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4cvtt\" (UID: \"a91ebfed-0165-4fea-b8f5-560094e2e1a0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.121232 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af35dcd7-1005-457a-a0a0-5f72b10b5ff8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gqqz7\" (UID: \"af35dcd7-1005-457a-a0a0-5f72b10b5ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.121893 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa31696f-9a08-4331-b0a7-e3f396284903-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g9j6b\" (UID: \"aa31696f-9a08-4331-b0a7-e3f396284903\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.121985 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d4f85c79-a083-4299-85cc-4ca7a7cd0bae-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kwgt8\" (UID: \"d4f85c79-a083-4299-85cc-4ca7a7cd0bae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.122417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-service-ca\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.123848 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-oauth-serving-cert\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.123864 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-trusted-ca-bundle\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.124560 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-serving-cert\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.125569 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-config\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.125593 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-csi-data-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.126028 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7416f097-b90b-46d1-b02d-b08b277b687d-metrics-tls\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.126060 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpkpk\" (UID: \"af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.126126 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/826b4024-9fd1-4457-95f4-13dfd107b12b-socket-dir\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.126472 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93465239-f81e-4801-95d3-520b7378fd5f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-htx5v\" (UID: \"93465239-f81e-4801-95d3-520b7378fd5f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.126512 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7726e7fd-9414-4440-a0f6-caeb757f7001-profile-collector-cert\") pod \"catalog-operator-68c6474976-rcwj8\" (UID: \"7726e7fd-9414-4440-a0f6-caeb757f7001\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.126626 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-secret-volume\") pod \"collect-profiles-29421495-8hdl7\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.127140 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c3aaa7b-f11e-484b-bf29-7b237e496506-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.127613 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af35dcd7-1005-457a-a0a0-5f72b10b5ff8-config\") pod \"kube-apiserver-operator-766d6c64bb-gqqz7\" (UID: \"af35dcd7-1005-457a-a0a0-5f72b10b5ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.128777 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/aa31696f-9a08-4331-b0a7-e3f396284903-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g9j6b\" (UID: \"aa31696f-9a08-4331-b0a7-e3f396284903\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.129182 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-oauth-config\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.129403 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6424ee19-eb6b-462d-948f-e9e8a16936a8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qh72r\" (UID: \"6424ee19-eb6b-462d-948f-e9e8a16936a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.129849 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f549f5f6-e480-4574-b59e-6b581bb4ac41-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhbrp\" (UID: \"f549f5f6-e480-4574-b59e-6b581bb4ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.131271 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.131549 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e84eac50-0a7a-41d2-bfdb-4b825beef104-metrics-tls\") pod \"dns-operator-744455d44c-w9vwd\" (UID: \"e84eac50-0a7a-41d2-bfdb-4b825beef104\") " pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.133283 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7726e7fd-9414-4440-a0f6-caeb757f7001-srv-cert\") pod \"catalog-operator-68c6474976-rcwj8\" (UID: \"7726e7fd-9414-4440-a0f6-caeb757f7001\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.152044 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.174233 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.180619 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-th5m2\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.198572 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 14:25:05 
crc kubenswrapper[4770]: I1209 14:25:05.204016 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-th5m2\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.211860 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.220091 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.221064 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:05.721046564 +0000 UTC m=+137.617248700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.244781 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.246602 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f549f5f6-e480-4574-b59e-6b581bb4ac41-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhbrp\" (UID: \"f549f5f6-e480-4574-b59e-6b581bb4ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.252317 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.261756 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d4f85c79-a083-4299-85cc-4ca7a7cd0bae-proxy-tls\") pod \"machine-config-controller-84d6567774-kwgt8\" (UID: \"d4f85c79-a083-4299-85cc-4ca7a7cd0bae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.272027 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.293060 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.301794 4770 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mw288"] Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.310123 4770 request.go:700] Waited for 1.000168572s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0 Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.312215 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.322316 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.322633 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:05.822616456 +0000 UTC m=+137.718818592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.324853 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3f1c07-833a-42f8-83a4-57683456d858-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s2p7g\" (UID: \"7f3f1c07-833a-42f8-83a4-57683456d858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.331872 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.352331 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.371463 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.386068 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/452dfc2f-000a-4b05-844a-ef541824574f-serving-cert\") pod \"service-ca-operator-777779d784-gw46q\" (UID: \"452dfc2f-000a-4b05-844a-ef541824574f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.392991 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.398206 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/452dfc2f-000a-4b05-844a-ef541824574f-config\") pod \"service-ca-operator-777779d784-gw46q\" (UID: \"452dfc2f-000a-4b05-844a-ef541824574f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.413332 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.423276 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.424023 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:05.923998682 +0000 UTC m=+137.820200828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.432228 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.447014 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-default-certificate\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.453111 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.464398 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-metrics-certs\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.471414 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.491417 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.496540 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-service-ca-bundle\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.511673 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.521441 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-stats-auth\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.525935 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.526557 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.026529929 +0000 UTC m=+137.922732105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.532834 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.552045 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.555629 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba79bdf2-2665-4caf-8c00-20c1405c319b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lwf4r\" (UID: \"ba79bdf2-2665-4caf-8c00-20c1405c319b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.571529 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.591924 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.599460 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bf390656-1b39-4f08-922b-0bc77c98a897-signing-key\") pod \"service-ca-9c57cc56f-sl22j\" (UID: \"bf390656-1b39-4f08-922b-0bc77c98a897\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.612152 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.626745 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.627010 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.126978678 +0000 UTC m=+138.023180854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.632805 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.651533 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.661423 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bf390656-1b39-4f08-922b-0bc77c98a897-signing-cabundle\") pod \"service-ca-9c57cc56f-sl22j\" (UID: \"bf390656-1b39-4f08-922b-0bc77c98a897\") " pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.671830 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.692417 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.696085 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a91ebfed-0165-4fea-b8f5-560094e2e1a0-srv-cert\") pod \"olm-operator-6b444d44fb-4cvtt\" (UID: \"a91ebfed-0165-4fea-b8f5-560094e2e1a0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.712032 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.728415 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: 
\"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.729004 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.228987102 +0000 UTC m=+138.125189238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.731293 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.744583 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9bbab9-3f87-4e24-923b-8af770ccbbfc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b89dw\" (UID: \"cc9bbab9-3f87-4e24-923b-8af770ccbbfc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.750864 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.771384 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.776676 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9bbab9-3f87-4e24-923b-8af770ccbbfc-config\") pod \"kube-controller-manager-operator-78b949d7b-b89dw\" (UID: \"cc9bbab9-3f87-4e24-923b-8af770ccbbfc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.791784 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.812772 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.830390 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.830556 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.330527222 +0000 UTC m=+138.226729378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.830709 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.831062 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.331048917 +0000 UTC m=+138.227251053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.832254 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.851667 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.856082 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad3f94de-e874-4a00-81f7-2de81795621a-webhook-cert\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.864303 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad3f94de-e874-4a00-81f7-2de81795621a-apiservice-cert\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.870966 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.881663 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-config-volume\") pod 
\"collect-profiles-29421495-8hdl7\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.891091 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.910982 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.917092 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c3aaa7b-f11e-484b-bf29-7b237e496506-images\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.930798 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.933095 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.933235 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.433209545 +0000 UTC m=+138.329411721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.933461 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:05 crc kubenswrapper[4770]: E1209 14:25:05.933906 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.433887594 +0000 UTC m=+138.330089760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.950737 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.971892 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.982995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24992d6a-277e-43cb-8d65-9ddbcfdad19a-config-volume\") pod \"dns-default-9rzmw\" (UID: \"24992d6a-277e-43cb-8d65-9ddbcfdad19a\") " pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:05 crc kubenswrapper[4770]: I1209 14:25:05.991166 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.011825 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.035763 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.035935 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.535909017 +0000 UTC m=+138.432111163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.036378 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.036876 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:25:06.536855254 +0000 UTC m=+138.433057490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.052299 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/24992d6a-277e-43cb-8d65-9ddbcfdad19a-metrics-tls\") pod \"dns-default-9rzmw\" (UID: \"24992d6a-277e-43cb-8d65-9ddbcfdad19a\") " pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.053069 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c3aaa7b-f11e-484b-bf29-7b237e496506-proxy-tls\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.059169 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bfr4\" (UniqueName: \"kubernetes.io/projected/23303764-ac1d-4937-9eb0-1b7f6e24e29f-kube-api-access-5bfr4\") pod \"authentication-operator-69f744f599-9gcs4\" (UID: \"23303764-ac1d-4937-9eb0-1b7f6e24e29f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.071128 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsx4x\" (UniqueName: \"kubernetes.io/projected/a895efff-87fa-45aa-8436-f72feb6ecf83-kube-api-access-nsx4x\") pod \"machine-approver-56656f9798-r5n7d\" (UID: \"a895efff-87fa-45aa-8436-f72feb6ecf83\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.091946 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbxtf\" (UniqueName: \"kubernetes.io/projected/f321b740-62d9-4f6a-8aac-0faa316ede0d-kube-api-access-xbxtf\") pod \"etcd-operator-b45778765-dt62n\" (UID: \"f321b740-62d9-4f6a-8aac-0faa316ede0d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.116217 4770 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.116917 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-certs podName:d0930499-6325-4612-9225-3ee8a11d613a nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.616892518 +0000 UTC m=+138.513094654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-certs") pod "machine-config-server-jcp67" (UID: "d0930499-6325-4612-9225-3ee8a11d613a") : failed to sync secret cache: timed out waiting for the condition Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.120713 4770 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.120929 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-node-bootstrap-token podName:d0930499-6325-4612-9225-3ee8a11d613a nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.620918632 +0000 UTC m=+138.517120768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-node-bootstrap-token") pod "machine-config-server-jcp67" (UID: "d0930499-6325-4612-9225-3ee8a11d613a") : failed to sync secret cache: timed out waiting for the condition Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.127054 4770 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.127158 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd4701f-941b-4955-aa30-ead9b8b203c0-cert podName:fbd4701f-941b-4955-aa30-ead9b8b203c0 nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.627137477 +0000 UTC m=+138.523339633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fbd4701f-941b-4955-aa30-ead9b8b203c0-cert") pod "ingress-canary-8vwj6" (UID: "fbd4701f-941b-4955-aa30-ead9b8b203c0") : failed to sync secret cache: timed out waiting for the condition Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.127830 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx6hw\" (UniqueName: \"kubernetes.io/projected/49d5b890-581d-4f7a-9811-2f011513994f-kube-api-access-fx6hw\") pod \"route-controller-manager-6576b87f9c-492ph\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.137702 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.137906 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.6378881 +0000 UTC m=+138.534090236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.138586 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.138900 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.638891318 +0000 UTC m=+138.535093454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.141047 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.144421 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257s4\" (UniqueName: \"kubernetes.io/projected/0e385a15-7fc9-4b6d-8770-89e954a5b286-kube-api-access-257s4\") pod \"console-operator-58897d9998-tt465\" (UID: \"0e385a15-7fc9-4b6d-8770-89e954a5b286\") " pod="openshift-console-operator/console-operator-58897d9998-tt465" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.165234 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62hq4\" (UniqueName: \"kubernetes.io/projected/fe586163-823a-49a4-a93e-55e0cc485b8f-kube-api-access-62hq4\") pod \"machine-api-operator-5694c8668f-x9m4l\" (UID: \"fe586163-823a-49a4-a93e-55e0cc485b8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.172178 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.185585 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqvn\" (UniqueName: \"kubernetes.io/projected/d377b023-282a-4a7f-a2fb-d944873c3bbb-kube-api-access-6bqvn\") pod \"apiserver-76f77b778f-q46jb\" (UID: \"d377b023-282a-4a7f-a2fb-d944873c3bbb\") " pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.190249 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmg6w\" (UniqueName: \"kubernetes.io/projected/c429a77c-763f-4db9-b2e9-7090262bf700-kube-api-access-vmg6w\") pod \"apiserver-7bbb656c7d-884kr\" (UID: \"c429a77c-763f-4db9-b2e9-7090262bf700\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.251396 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.251937 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.252197 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.252456 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.752436126 +0000 UTC m=+138.648638262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.252584 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tt465" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.253844 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.254004 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.255672 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgggt\" (UniqueName: \"kubernetes.io/projected/5a7d357a-d7de-4c54-b2a2-caa7fb5f7904-kube-api-access-tgggt\") pod \"downloads-7954f5f757-q979p\" (UID: \"5a7d357a-d7de-4c54-b2a2-caa7fb5f7904\") " pod="openshift-console/downloads-7954f5f757-q979p" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.271990 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.293133 4770 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.296359 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.310811 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.314232 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.329412 4770 request.go:700] Waited for 1.897290461s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.329474 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.338439 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.344442 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" event={"ID":"c2c6e492-2de2-4b7c-bc62-a3396a49b56e","Type":"ContainerStarted","Data":"efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449"} Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.344501 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" event={"ID":"c2c6e492-2de2-4b7c-bc62-a3396a49b56e","Type":"ContainerStarted","Data":"e23604394d6c9b3674b789917e932c10e0134b722b8988cec03272aecfd32c8a"} Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.345040 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.353536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.353936 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.853923685 +0000 UTC m=+138.750125821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.413899 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-q979p" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.418099 4770 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mw288 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.418150 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" podUID="c2c6e492-2de2-4b7c-bc62-a3396a49b56e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.427203 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.428025 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.431740 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.436993 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.438054 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.455859 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.456033 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.457769 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:06.957748039 +0000 UTC m=+138.853950185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.472090 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.487009 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.492965 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.496399 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"] Dec 09 14:25:06 crc kubenswrapper[4770]: W1209 14:25:06.507746 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49d5b890_581d_4f7a_9811_2f011513994f.slice/crio-ffddceda7715a7f980111872fa80c7aa9d66ed99c5d0cdecd77a0b5305da1951 WatchSource:0}: Error finding container ffddceda7715a7f980111872fa80c7aa9d66ed99c5d0cdecd77a0b5305da1951: Status 404 returned error can't find the container with id ffddceda7715a7f980111872fa80c7aa9d66ed99c5d0cdecd77a0b5305da1951 Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.510822 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.541016 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-x9m4l"] Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.558331 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.558952 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.058935359 +0000 UTC m=+138.955137495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: W1209 14:25:06.586473 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe586163_823a_49a4_a93e_55e0cc485b8f.slice/crio-71de1e3254a0bce5cc09d369e87c9304d2677686e38245521684b3cb7d24130e WatchSource:0}: Error finding container 71de1e3254a0bce5cc09d369e87c9304d2677686e38245521684b3cb7d24130e: Status 404 returned error can't find the container with id 71de1e3254a0bce5cc09d369e87c9304d2677686e38245521684b3cb7d24130e Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.599243 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkcwl\" (UniqueName: \"kubernetes.io/projected/45443a27-b7dd-423e-a936-64caffea35bc-kube-api-access-kkcwl\") pod \"cluster-samples-operator-665b6dd947-nb4h5\" (UID: \"45443a27-b7dd-423e-a936-64caffea35bc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.602160 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7tp5\" (UniqueName: \"kubernetes.io/projected/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-kube-api-access-j7tp5\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.607252 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2br6\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-kube-api-access-w2br6\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.631500 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-bound-sa-token\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.651564 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tt465"] Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.660438 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.660656 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-certs\") pod 
\"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.660737 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-node-bootstrap-token\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.660906 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbd4701f-941b-4955-aa30-ead9b8b203c0-cert\") pod \"ingress-canary-8vwj6\" (UID: \"fbd4701f-941b-4955-aa30-ead9b8b203c0\") " pod="openshift-ingress-canary/ingress-canary-8vwj6" Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.662056 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.162038134 +0000 UTC m=+139.058240270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.663916 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbd4701f-941b-4955-aa30-ead9b8b203c0-cert\") pod \"ingress-canary-8vwj6\" (UID: \"fbd4701f-941b-4955-aa30-ead9b8b203c0\") " pod="openshift-ingress-canary/ingress-canary-8vwj6" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.664371 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-certs\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.666577 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d0930499-6325-4612-9225-3ee8a11d613a-node-bootstrap-token\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.679075 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dt62n"] Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.681028 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7f2bcb7-c30a-4920-8781-21c53a2ea81f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-486mt\" (UID: \"b7f2bcb7-c30a-4920-8781-21c53a2ea81f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 
14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.691357 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbrf2\" (UniqueName: \"kubernetes.io/projected/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-kube-api-access-hbrf2\") pod \"oauth-openshift-558db77b4-22k76\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.691357 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n8rd\" (UniqueName: \"kubernetes.io/projected/2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf-kube-api-access-7n8rd\") pod \"openshift-config-operator-7777fb866f-4sg97\" (UID: \"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:06 crc kubenswrapper[4770]: W1209 14:25:06.694296 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf321b740_62d9_4f6a_8aac_0faa316ede0d.slice/crio-e96f54a5f39966c845c033b63e113664e66c40cd9534fa08a83b1cf046dd648f WatchSource:0}: Error finding container e96f54a5f39966c845c033b63e113664e66c40cd9534fa08a83b1cf046dd648f: Status 404 returned error can't find the container with id e96f54a5f39966c845c033b63e113664e66c40cd9534fa08a83b1cf046dd648f Dec 09 14:25:06 crc kubenswrapper[4770]: W1209 14:25:06.702298 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e385a15_7fc9_4b6d_8770_89e954a5b286.slice/crio-158d6ef24bd0de8d39439fd15af216cb81338e5a3dcef80e97e30e68c5a666ce WatchSource:0}: Error finding container 158d6ef24bd0de8d39439fd15af216cb81338e5a3dcef80e97e30e68c5a666ce: Status 404 returned error can't find the container with id 158d6ef24bd0de8d39439fd15af216cb81338e5a3dcef80e97e30e68c5a666ce Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.712695 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jprvg\" (UniqueName: \"kubernetes.io/projected/d0930499-6325-4612-9225-3ee8a11d613a-kube-api-access-jprvg\") pod \"machine-config-server-jcp67\" (UID: \"d0930499-6325-4612-9225-3ee8a11d613a\") " pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.715860 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.725546 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jcp67" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.726640 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrth\" (UniqueName: \"kubernetes.io/projected/7f3f1c07-833a-42f8-83a4-57683456d858-kube-api-access-bjrth\") pod \"package-server-manager-789f6589d5-s2p7g\" (UID: \"7f3f1c07-833a-42f8-83a4-57683456d858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.746984 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9gcs4"] Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.752483 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bvfc\" (UniqueName: \"kubernetes.io/projected/aa31696f-9a08-4331-b0a7-e3f396284903-kube-api-access-5bvfc\") pod \"openshift-apiserver-operator-796bbdcf4f-g9j6b\" (UID: \"aa31696f-9a08-4331-b0a7-e3f396284903\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.756048 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.765502 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.766117 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.266083914 +0000 UTC m=+139.162286050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.766290 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.783868 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4gd7\" (UniqueName: \"kubernetes.io/projected/ba79bdf2-2665-4caf-8c00-20c1405c319b-kube-api-access-f4gd7\") pod \"multus-admission-controller-857f4d67dd-lwf4r\" (UID: \"ba79bdf2-2665-4caf-8c00-20c1405c319b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" Dec 09 14:25:06 crc kubenswrapper[4770]: W1209 14:25:06.785670 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23303764_ac1d_4937_9eb0_1b7f6e24e29f.slice/crio-30d7f5b642c31ac5109d729781bdf92ffb61d178e65bfeeb29d10b617d34d1e9 WatchSource:0}: Error finding container 30d7f5b642c31ac5109d729781bdf92ffb61d178e65bfeeb29d10b617d34d1e9: Status 404 returned error can't find the container with id 30d7f5b642c31ac5109d729781bdf92ffb61d178e65bfeeb29d10b617d34d1e9 Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.788886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpfvj\" (UniqueName: \"kubernetes.io/projected/f549f5f6-e480-4574-b59e-6b581bb4ac41-kube-api-access-zpfvj\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhbrp\" (UID: \"f549f5f6-e480-4574-b59e-6b581bb4ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.806598 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7416f097-b90b-46d1-b02d-b08b277b687d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.846325 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mngws\" (UniqueName: \"kubernetes.io/projected/7416f097-b90b-46d1-b02d-b08b277b687d-kube-api-access-mngws\") pod \"ingress-operator-5b745b69d9-47k7z\" (UID: \"7416f097-b90b-46d1-b02d-b08b277b687d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.847923 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.868107 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.868686 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.368670133 +0000 UTC m=+139.264872269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.879043 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.903817 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.904686 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7rh6\" (UniqueName: \"kubernetes.io/projected/826b4024-9fd1-4457-95f4-13dfd107b12b-kube-api-access-p7rh6\") pod \"csi-hostpathplugin-wdlml\" (UID: \"826b4024-9fd1-4457-95f4-13dfd107b12b\") " pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.912938 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.919452 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chr76\" (UniqueName: \"kubernetes.io/projected/6424ee19-eb6b-462d-948f-e9e8a16936a8-kube-api-access-chr76\") pod \"openshift-controller-manager-operator-756b6f6bc6-qh72r\" (UID: \"6424ee19-eb6b-462d-948f-e9e8a16936a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.932196 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9x5f\" (UniqueName: \"kubernetes.io/projected/a91ebfed-0165-4fea-b8f5-560094e2e1a0-kube-api-access-f9x5f\") pod \"olm-operator-6b444d44fb-4cvtt\" (UID: \"a91ebfed-0165-4fea-b8f5-560094e2e1a0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.933544 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc9bbab9-3f87-4e24-923b-8af770ccbbfc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b89dw\" (UID: \"cc9bbab9-3f87-4e24-923b-8af770ccbbfc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.935615 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgpn\" (UniqueName: \"kubernetes.io/projected/24992d6a-277e-43cb-8d65-9ddbcfdad19a-kube-api-access-ghgpn\") pod \"dns-default-9rzmw\" (UID: \"24992d6a-277e-43cb-8d65-9ddbcfdad19a\") " pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.941215 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.953613 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93465239-f81e-4801-95d3-520b7378fd5f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-htx5v\" (UID: \"93465239-f81e-4801-95d3-520b7378fd5f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.955488 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.969741 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.970011 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:06 crc kubenswrapper[4770]: E1209 14:25:06.970285 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.470270436 +0000 UTC m=+139.366472572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:06 crc kubenswrapper[4770]: I1209 14:25:06.978206 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.003853 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdg4d\" (UniqueName: \"kubernetes.io/projected/fbd4701f-941b-4955-aa30-ead9b8b203c0-kube-api-access-qdg4d\") pod \"ingress-canary-8vwj6\" (UID: \"fbd4701f-941b-4955-aa30-ead9b8b203c0\") " pod="openshift-ingress-canary/ingress-canary-8vwj6" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.018872 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.026822 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftcdq\" (UniqueName: \"kubernetes.io/projected/08a04cc9-6982-4f5d-84c9-7a9c875d5a1b-kube-api-access-ftcdq\") pod \"router-default-5444994796-9n5nz\" (UID: \"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b\") " pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.052337 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87vtr\" (UniqueName: \"kubernetes.io/projected/f0d2ce2e-1c11-4912-ba78-f4ae51537765-kube-api-access-87vtr\") pod \"migrator-59844c95c7-8htf4\" (UID: \"f0d2ce2e-1c11-4912-ba78-f4ae51537765\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.052487 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgw5q\" (UniqueName: \"kubernetes.io/projected/452dfc2f-000a-4b05-844a-ef541824574f-kube-api-access-cgw5q\") pod \"service-ca-operator-777779d784-gw46q\" (UID: \"452dfc2f-000a-4b05-844a-ef541824574f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.052793 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wdlml" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.060839 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8vwj6" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.071380 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4vc5\" (UniqueName: \"kubernetes.io/projected/7726e7fd-9414-4440-a0f6-caeb757f7001-kube-api-access-h4vc5\") pod \"catalog-operator-68c6474976-rcwj8\" (UID: \"7726e7fd-9414-4440-a0f6-caeb757f7001\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.071931 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.072287 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.572272598 +0000 UTC m=+139.468474734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.097519 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlwzf\" (UniqueName: \"kubernetes.io/projected/bf390656-1b39-4f08-922b-0bc77c98a897-kube-api-access-xlwzf\") pod \"service-ca-9c57cc56f-sl22j\" (UID: \"bf390656-1b39-4f08-922b-0bc77c98a897\") " pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.100266 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q46jb"] Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.101373 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgx95\" (UniqueName: \"kubernetes.io/projected/c1822a0a-3dcd-455f-a11c-15c6171f2068-kube-api-access-zgx95\") pod \"console-f9d7485db-x2gs6\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.115528 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8cqh\" (UniqueName: \"kubernetes.io/projected/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-kube-api-access-c8cqh\") pod \"collect-profiles-29421495-8hdl7\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.116271 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-q979p"] Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.130032 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr"] Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.134598 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpbpj\" (UniqueName: \"kubernetes.io/projected/d4f85c79-a083-4299-85cc-4ca7a7cd0bae-kube-api-access-lpbpj\") pod \"machine-config-controller-84d6567774-kwgt8\" (UID: \"d4f85c79-a083-4299-85cc-4ca7a7cd0bae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.140412 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.147078 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh6hl\" (UniqueName: \"kubernetes.io/projected/e84eac50-0a7a-41d2-bfdb-4b825beef104-kube-api-access-kh6hl\") pod \"dns-operator-744455d44c-w9vwd\" (UID: \"e84eac50-0a7a-41d2-bfdb-4b825beef104\") " pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.156253 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.168811 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.173809 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.174203 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.67418368 +0000 UTC m=+139.570385816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.187368 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.200828 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.201127 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af35dcd7-1005-457a-a0a0-5f72b10b5ff8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gqqz7\" (UID: \"af35dcd7-1005-457a-a0a0-5f72b10b5ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.211061 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbmlv\" (UniqueName: \"kubernetes.io/projected/af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e-kube-api-access-sbmlv\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpkpk\" (UID: \"af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.219151 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbr4d\" (UniqueName: \"kubernetes.io/projected/3c3aaa7b-f11e-484b-bf29-7b237e496506-kube-api-access-nbr4d\") pod \"machine-config-operator-74547568cd-8z2bt\" (UID: \"3c3aaa7b-f11e-484b-bf29-7b237e496506\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.228017 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.233060 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.242057 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bccd\" (UniqueName: \"kubernetes.io/projected/ad3f94de-e874-4a00-81f7-2de81795621a-kube-api-access-8bccd\") pod \"packageserver-d55dfcdfc-k96sv\" (UID: \"ad3f94de-e874-4a00-81f7-2de81795621a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.251439 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.264070 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.271348 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5"] Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.275452 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.275720 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.775705579 +0000 UTC m=+139.671907715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.287103 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.296748 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.298365 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj9ss\" (UniqueName: \"kubernetes.io/projected/08d594b0-871f-4f3f-9d64-f14f0773be76-kube-api-access-rj9ss\") pod \"marketplace-operator-79b997595-th5m2\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.307268 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.312261 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.352237 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" event={"ID":"23303764-ac1d-4937-9eb0-1b7f6e24e29f","Type":"ContainerStarted","Data":"12545236c1705dfbbfefe010192d61167360edba8117c62d10484e619a6b8d9f"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.352872 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" event={"ID":"23303764-ac1d-4937-9eb0-1b7f6e24e29f","Type":"ContainerStarted","Data":"30d7f5b642c31ac5109d729781bdf92ffb61d178e65bfeeb29d10b617d34d1e9"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.356874 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" event={"ID":"49d5b890-581d-4f7a-9811-2f011513994f","Type":"ContainerStarted","Data":"c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.356941 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" event={"ID":"49d5b890-581d-4f7a-9811-2f011513994f","Type":"ContainerStarted","Data":"ffddceda7715a7f980111872fa80c7aa9d66ed99c5d0cdecd77a0b5305da1951"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.357140 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.374254 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" event={"ID":"a895efff-87fa-45aa-8436-f72feb6ecf83","Type":"ContainerStarted","Data":"60ff812e2ff9adeb569859fc4183ac9daa6d1bf3a6ec33eacf0c88398192b5d9"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.374326 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" event={"ID":"a895efff-87fa-45aa-8436-f72feb6ecf83","Type":"ContainerStarted","Data":"1997252dd417ec70962e860f267c579c21e602f7c3422faaa6b7bf318aa5ff49"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.375692 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" event={"ID":"d377b023-282a-4a7f-a2fb-d944873c3bbb","Type":"ContainerStarted","Data":"2867a161e0b8c4a320338383e36fbd2a7766b9cc53b5e7dd006a3eb898ea471d"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.376707 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.376998 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.876984321 +0000 UTC m=+139.773186457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.377622 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q979p" event={"ID":"5a7d357a-d7de-4c54-b2a2-caa7fb5f7904","Type":"ContainerStarted","Data":"1320d4179b2ca1993f93d6783a2a6f6598bfa0a8ea37735ae61374d1e5ad356b"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.378592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jcp67" event={"ID":"d0930499-6325-4612-9225-3ee8a11d613a","Type":"ContainerStarted","Data":"a1504fcdfdc899355e7ebb579d5fd1918c9f8ffa865af4c3078d1caedfabd708"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.378615 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jcp67" event={"ID":"d0930499-6325-4612-9225-3ee8a11d613a","Type":"ContainerStarted","Data":"14adcc2ba589cbbfcc98205e5d90ba4d54c4ceb7fed6168fe4463415848b1124"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.381919 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" event={"ID":"fe586163-823a-49a4-a93e-55e0cc485b8f","Type":"ContainerStarted","Data":"d99edbeac2356c1980150025cdf2f987982ddd4b15c13a3b05490390958345ae"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.381949 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" event={"ID":"fe586163-823a-49a4-a93e-55e0cc485b8f","Type":"ContainerStarted","Data":"181101df71010491528793465d6087e4b79b5f5304d4fc93159a8e504f72d89f"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.381962 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" event={"ID":"fe586163-823a-49a4-a93e-55e0cc485b8f","Type":"ContainerStarted","Data":"71de1e3254a0bce5cc09d369e87c9304d2677686e38245521684b3cb7d24130e"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.383905 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" event={"ID":"f321b740-62d9-4f6a-8aac-0faa316ede0d","Type":"ContainerStarted","Data":"90246ab3dbb078420093c698b2e28a1536072cf3f2c4b9f36edd8b08db1685b0"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.383932 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" event={"ID":"f321b740-62d9-4f6a-8aac-0faa316ede0d","Type":"ContainerStarted","Data":"e96f54a5f39966c845c033b63e113664e66c40cd9534fa08a83b1cf046dd648f"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.384908 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" 
event={"ID":"c429a77c-763f-4db9-b2e9-7090262bf700","Type":"ContainerStarted","Data":"0149b5f54de372197de9eb24a613af4609eac3ee3f1d88cdc503e7cca475412b"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.388056 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tt465" event={"ID":"0e385a15-7fc9-4b6d-8770-89e954a5b286","Type":"ContainerStarted","Data":"b64ce76a17f3ec80a4c5d458b1a8160511f34b2c1311a419b79661cc32fef3f2"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.388084 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tt465" event={"ID":"0e385a15-7fc9-4b6d-8770-89e954a5b286","Type":"ContainerStarted","Data":"158d6ef24bd0de8d39439fd15af216cb81338e5a3dcef80e97e30e68c5a666ce"} Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.388098 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tt465" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.388153 4770 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mw288 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.388178 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" podUID="c2c6e492-2de2-4b7c-bc62-a3396a49b56e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.418373 4770 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-492ph container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.418463 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" podUID="49d5b890-581d-4f7a-9811-2f011513994f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.418382 4770 patch_prober.go:28] interesting pod/console-operator-58897d9998-tt465 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.418523 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tt465" podUID="0e385a15-7fc9-4b6d-8770-89e954a5b286" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.497177 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.497288 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.497505 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.997480446 +0000 UTC m=+139.893682572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.498061 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.499071 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:07.999054439 +0000 UTC m=+139.895256725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.507041 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.517415 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.598923 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.599097 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.099080787 +0000 UTC m=+139.995282923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.599642 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.600026 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.100014124 +0000 UTC m=+139.996216260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.755043 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.755231 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.255208605 +0000 UTC m=+140.151410741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.755768 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.756248 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.256205223 +0000 UTC m=+140.152407359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.861633 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.862087 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.362065055 +0000 UTC m=+140.258267191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:07 crc kubenswrapper[4770]: I1209 14:25:07.963553 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:07 crc kubenswrapper[4770]: E1209 14:25:07.963958 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.463943874 +0000 UTC m=+140.360146000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.064805 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.065075 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.565060713 +0000 UTC m=+140.461262849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.166341 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.166698 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.666687175 +0000 UTC m=+140.562889311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.267333 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.267810 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.767794913 +0000 UTC m=+140.663997049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.325145 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jcp67" podStartSLOduration=4.325127148 podStartE2EDuration="4.325127148s" podCreationTimestamp="2025-12-09 14:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:08.322906986 +0000 UTC m=+140.219109122" watchObservedRunningTime="2025-12-09 14:25:08.325127148 +0000 UTC m=+140.221329284" Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.373561 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-tt465" podStartSLOduration=119.373540212 podStartE2EDuration="1m59.373540212s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:08.340643785 +0000 UTC m=+140.236845921" watchObservedRunningTime="2025-12-09 14:25:08.373540212 +0000 UTC m=+140.269742368" Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.376089 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.379561 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.8795307 +0000 UTC m=+140.775732836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.393974 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9n5nz" event={"ID":"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b","Type":"ContainerStarted","Data":"a51a8e8c7299b2d5813003cd25f13c8653433acfc85356b7b0a40f424cd43636"} Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.395278 4770 patch_prober.go:28] interesting pod/console-operator-58897d9998-tt465 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.395334 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tt465" podUID="0e385a15-7fc9-4b6d-8770-89e954a5b286" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.404101 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.474675 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" podStartSLOduration=119.47464785 podStartE2EDuration="1m59.47464785s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:08.467912909 +0000 UTC m=+140.364115055" watchObservedRunningTime="2025-12-09 14:25:08.47464785 +0000 UTC m=+140.370849986" Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.476884 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.477073 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.977056018 +0000 UTC m=+140.873258154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.477654 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.478274 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:08.978251061 +0000 UTC m=+140.874453197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.579484 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gcs4" podStartSLOduration=119.579464612 podStartE2EDuration="1m59.579464612s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:08.556551616 +0000 UTC m=+140.452753772" watchObservedRunningTime="2025-12-09 14:25:08.579464612 +0000 UTC m=+140.475666748" Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.580844 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.581552 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" podStartSLOduration=119.58154245 podStartE2EDuration="1m59.58154245s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:08.579267636 +0000 UTC m=+140.475469772" watchObservedRunningTime="2025-12-09 14:25:08.58154245 +0000 UTC m=+140.477744586" Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.582556 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.082530278 +0000 UTC m=+140.978732414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.682117 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.682737 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.18271097 +0000 UTC m=+141.078913106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.784159 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.784482 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.284464926 +0000 UTC m=+141.180667062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.784915 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.786663 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.286649988 +0000 UTC m=+141.182852124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.797824 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dt62n" podStartSLOduration=119.797808552 podStartE2EDuration="1m59.797808552s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:08.777683905 +0000 UTC m=+140.673886041" watchObservedRunningTime="2025-12-09 14:25:08.797808552 +0000 UTC m=+140.694010688" Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.891892 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.892594 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.392575871 +0000 UTC m=+141.288778007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:08 crc kubenswrapper[4770]: I1209 14:25:08.996140 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:08 crc kubenswrapper[4770]: E1209 14:25:08.996629 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.49657114 +0000 UTC m=+141.392773286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.101251 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 14:25:09.101967 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.601944789 +0000 UTC m=+141.498146935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.102459 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 14:25:09.103023 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.603005278 +0000 UTC m=+141.499207414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.193636 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-x9m4l" podStartSLOduration=120.19361813 podStartE2EDuration="2m0.19361813s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:09.192572911 +0000 UTC m=+141.088775047" watchObservedRunningTime="2025-12-09 14:25:09.19361813 +0000 UTC m=+141.089820266" Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.217549 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 14:25:09.218721 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.718705117 +0000 UTC m=+141.614907253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.319430 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 14:25:09.319832 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:09.819820095 +0000 UTC m=+141.716022231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.410384 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" event={"ID":"45443a27-b7dd-423e-a936-64caffea35bc","Type":"ContainerStarted","Data":"e2a48242b70d01324a829a24b2f3fb63051b4eeddee129c360867f5804180806"} Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.410428 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" event={"ID":"45443a27-b7dd-423e-a936-64caffea35bc","Type":"ContainerStarted","Data":"faa295b6698171f626b48a0a5e230ccae300538a97a62bc467b9c658913d7afe"} Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.410438 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" event={"ID":"45443a27-b7dd-423e-a936-64caffea35bc","Type":"ContainerStarted","Data":"41c6ac2e6e10e61e38e77be82229acdf352b0e916cbc1dc3d19fa9ee8ac67064"} Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.424428 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 14:25:09.425538 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 14:25:09.925523443 +0000 UTC m=+141.821725579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.427224 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" event={"ID":"a895efff-87fa-45aa-8436-f72feb6ecf83","Type":"ContainerStarted","Data":"476040da161270249dc352f1c2530f903ce196e2ab930239b606a5964b996520"} Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.442405 4770 generic.go:334] "Generic (PLEG): container finished" podID="c429a77c-763f-4db9-b2e9-7090262bf700" containerID="4f72ab456e359e07e70c744034e2811cf03c0115934f30ea933ef4eed8fc5082" exitCode=0 Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.443257 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" event={"ID":"c429a77c-763f-4db9-b2e9-7090262bf700","Type":"ContainerDied","Data":"4f72ab456e359e07e70c744034e2811cf03c0115934f30ea933ef4eed8fc5082"} Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.470943 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nb4h5" podStartSLOduration=121.470923631 podStartE2EDuration="2m1.470923631s" podCreationTimestamp="2025-12-09 14:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:09.451242697 +0000 UTC m=+141.347444833" watchObservedRunningTime="2025-12-09 14:25:09.470923631 +0000 UTC m=+141.367125767" Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.483043 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9n5nz" event={"ID":"08a04cc9-6982-4f5d-84c9-7a9c875d5a1b","Type":"ContainerStarted","Data":"2ee940ac2ead6833467cff026d86ee04ebe39430dd41271c5e5a1817d2537ac8"} Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.519338 4770 generic.go:334] "Generic (PLEG): container finished" podID="d377b023-282a-4a7f-a2fb-d944873c3bbb" containerID="64df405d5be3e01df00c9ecff054ae7e2b4ff6e966af42add9ada988cba3abda" exitCode=0 Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.519744 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" event={"ID":"d377b023-282a-4a7f-a2fb-d944873c3bbb","Type":"ContainerDied","Data":"64df405d5be3e01df00c9ecff054ae7e2b4ff6e966af42add9ada988cba3abda"} Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.525150 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 14:25:09.525412 4770 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.025381896 +0000 UTC m=+141.921584032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.530710 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q979p" event={"ID":"5a7d357a-d7de-4c54-b2a2-caa7fb5f7904","Type":"ContainerStarted","Data":"9ee5fb35ddb4799007ec512df3327b5f79fed6b834ad7514891c3d9d37e54883"} Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.530826 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-q979p" Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.545420 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r5n7d" podStartSLOduration=121.545392559 podStartE2EDuration="2m1.545392559s" podCreationTimestamp="2025-12-09 14:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:09.531192659 +0000 UTC m=+141.427394795" watchObservedRunningTime="2025-12-09 14:25:09.545392559 +0000 UTC m=+141.441594695" Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.575597 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.575673 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.614174 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-q979p" podStartSLOduration=120.614149965 podStartE2EDuration="2m0.614149965s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:09.600384249 +0000 UTC m=+141.496586385" watchObservedRunningTime="2025-12-09 14:25:09.614149965 +0000 UTC m=+141.510352101" Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.628129 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 
14:25:09.630375 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.130351322 +0000 UTC m=+142.026553458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.633393 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b"] Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.634305 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4sg97"] Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.654388 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9n5nz" podStartSLOduration=120.654368219 podStartE2EDuration="2m0.654368219s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:09.649571433 +0000 UTC m=+141.545773569" watchObservedRunningTime="2025-12-09 14:25:09.654368219 +0000 UTC m=+141.550570355" Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.708748 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22k76"] Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.731418 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z"] Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.741335 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 14:25:09.741927 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.241912864 +0000 UTC m=+142.138115000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.842651 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 14:25:09.843198 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.343169146 +0000 UTC m=+142.239371282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:09 crc kubenswrapper[4770]: I1209 14:25:09.948303 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:09 crc kubenswrapper[4770]: E1209 14:25:09.949199 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.449184753 +0000 UTC m=+142.345386879 (durationBeforeRetry 500ms). 
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.057827 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.058419 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.558368408 +0000 UTC m=+142.454570544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.143136 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.157548 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.161353 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.161817 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.661802022 +0000 UTC m=+142.558004158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.181300 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wdlml"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.201143 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lwf4r"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.254885 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9n5nz"
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.267044 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 14:25:10 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld
Dec 09 14:25:10 crc kubenswrapper[4770]: [+]process-running ok
Dec 09 14:25:10 crc kubenswrapper[4770]: healthz check failed
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.267117 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.276205 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8vwj6"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.278225 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.280335 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.780312429 +0000 UTC m=+142.676514565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.301527 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9rzmw"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.324663 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.324713 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.334109 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.335194 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sl22j"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.349149 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.378131 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4"]
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.383223 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.383784 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.883771874 +0000 UTC m=+142.779974010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.484793 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.486901 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:10.986858047 +0000 UTC m=+142.883060193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.536598 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" event={"ID":"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086","Type":"ContainerStarted","Data":"fb865cf88d7e4281099d4db56dcd65f4263ee929d3866a62b1e8613bd6cc1bef"}
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.537522 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" event={"ID":"ba79bdf2-2665-4caf-8c00-20c1405c319b","Type":"ContainerStarted","Data":"3af944a073427b86317fc0b114ccd58d1da5359d913deefd4f8f6b390ce45870"}
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.553038 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" event={"ID":"7416f097-b90b-46d1-b02d-b08b277b687d","Type":"ContainerStarted","Data":"40b9adff053ff13e1a0f7a974f13617577a404b9adddb44e53ca1b0d860afe51"}
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.553085 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" event={"ID":"7416f097-b90b-46d1-b02d-b08b277b687d","Type":"ContainerStarted","Data":"c9f2c8160463de4d9abe2131eb2ef859f9983d3f05c79eb34be0ba27dbb0986e"}
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.556027 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" event={"ID":"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf","Type":"ContainerStarted","Data":"6ca53b09829f5b773a2f6380ede378cee7d134e0e25ac9c7090d8ab866a84486"}
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.556703 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" event={"ID":"aa31696f-9a08-4331-b0a7-e3f396284903","Type":"ContainerStarted","Data":"0b7a5ca2e756542a08267dae92b4cdbc9dd794009e9a7b1d35a19dec047a87f6"}
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" event={"ID":"aa31696f-9a08-4331-b0a7-e3f396284903","Type":"ContainerStarted","Data":"0b7a5ca2e756542a08267dae92b4cdbc9dd794009e9a7b1d35a19dec047a87f6"} Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.559286 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" event={"ID":"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6","Type":"ContainerStarted","Data":"59b6d7e7f13f8481cd3040012f5a07d814b92424e42a891fd689911573c9c171"} Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.560116 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wdlml" event={"ID":"826b4024-9fd1-4457-95f4-13dfd107b12b","Type":"ContainerStarted","Data":"03aa339d7000f956b70281c93ef7a97920e626e758f45fc46ecb5b74c62eeff7"} Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.560838 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8vwj6" event={"ID":"fbd4701f-941b-4955-aa30-ead9b8b203c0","Type":"ContainerStarted","Data":"b99bf01e14bf4eaec654faeb6b8f9170e690a5c9681d8ed03279c941a7f17044"} Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.562381 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" event={"ID":"f549f5f6-e480-4574-b59e-6b581bb4ac41","Type":"ContainerStarted","Data":"502c9ae03e0e8fffe7dc4c19aeefe6397fbf53d52c2dc626527cf765eaef616a"} Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.563396 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.563427 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.589864 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.591915 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:11.091896397 +0000 UTC m=+142.988098533 (durationBeforeRetry 500ms). 
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.691039 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.691275 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:11.191242854 +0000 UTC m=+143.087444990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.691423 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.691717 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:11.191704727 +0000 UTC m=+143.087906863 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.792865 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.793058 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:11.293027391 +0000 UTC m=+143.189229517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.793453 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.793952 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:11.293935737 +0000 UTC m=+143.190137873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.895119 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.895414 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:11.395364224 +0000 UTC m=+143.291566530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:10 crc kubenswrapper[4770]: I1209 14:25:10.996677 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:10 crc kubenswrapper[4770]: E1209 14:25:10.997060 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:11.497043948 +0000 UTC m=+143.393246084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.098026 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:11 crc kubenswrapper[4770]: E1209 14:25:11.098238 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:11.598208947 +0000 UTC m=+143.494411083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.098291 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:11 crc kubenswrapper[4770]: E1209 14:25:11.098608 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:11.598594368 +0000 UTC m=+143.494796504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.199649 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.255425 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 14:25:11 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld
Dec 09 14:25:11 crc kubenswrapper[4770]: [+]process-running ok
Dec 09 14:25:11 crc kubenswrapper[4770]: healthz check failed
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.255481 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 14:25:11 crc kubenswrapper[4770]: E1209 14:25:11.670098 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.170067355 +0000 UTC m=+144.066269501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:11 crc kubenswrapper[4770]: W1209 14:25:11.679173 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf35dcd7_1005_457a_a0a0_5f72b10b5ff8.slice/crio-39fd1a9bda603c4dc4bc3f2b085169a864addadc34e2aeaf3c97d8b0397f3bd6 WatchSource:0}: Error finding container 39fd1a9bda603c4dc4bc3f2b085169a864addadc34e2aeaf3c97d8b0397f3bd6: Status 404 returned error can't find the container with id 39fd1a9bda603c4dc4bc3f2b085169a864addadc34e2aeaf3c97d8b0397f3bd6
Dec 09 14:25:11 crc kubenswrapper[4770]: W1209 14:25:11.683591 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7f2bcb7_c30a_4920_8781_21c53a2ea81f.slice/crio-6c2dbe74fd9f1b890125c2d45ed4c188c97c6c92fbde403fc104b983ec19667e WatchSource:0}: Error finding container 6c2dbe74fd9f1b890125c2d45ed4c188c97c6c92fbde403fc104b983ec19667e: Status 404 returned error can't find the container with id 6c2dbe74fd9f1b890125c2d45ed4c188c97c6c92fbde403fc104b983ec19667e
Dec 09 14:25:11 crc kubenswrapper[4770]: W1209 14:25:11.686079 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf390656_1b39_4f08_922b_0bc77c98a897.slice/crio-752e5cac2eb846b2211d08cf0c43c2ae16fb80db31394da6457fba5c96c49d42 WatchSource:0}: Error finding container 752e5cac2eb846b2211d08cf0c43c2ae16fb80db31394da6457fba5c96c49d42: Status 404 returned error can't find the container with id 752e5cac2eb846b2211d08cf0c43c2ae16fb80db31394da6457fba5c96c49d42
Dec 09 14:25:11 crc kubenswrapper[4770]: W1209 14:25:11.687308 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda91ebfed_0165_4fea_b8f5_560094e2e1a0.slice/crio-4ff3814fbec67af41e3fd5328dfc86fa2e2d3d20f645e7daff58047a4230226f WatchSource:0}: Error finding container 4ff3814fbec67af41e3fd5328dfc86fa2e2d3d20f645e7daff58047a4230226f: Status 404 returned error can't find the container with id 4ff3814fbec67af41e3fd5328dfc86fa2e2d3d20f645e7daff58047a4230226f
Dec 09 14:25:11 crc kubenswrapper[4770]: W1209 14:25:11.688996 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0d2ce2e_1c11_4912_ba78_f4ae51537765.slice/crio-859c58f9c707988fa9a7481c3806546d30609c98c77a6ee5ed7e726a768f8e92 WatchSource:0}: Error finding container 859c58f9c707988fa9a7481c3806546d30609c98c77a6ee5ed7e726a768f8e92: Status 404 returned error can't find the container with id 859c58f9c707988fa9a7481c3806546d30609c98c77a6ee5ed7e726a768f8e92
Dec 09 14:25:11 crc kubenswrapper[4770]: W1209 14:25:11.692031 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24992d6a_277e_43cb_8d65_9ddbcfdad19a.slice/crio-a7c22b4e102ef9bd557d888c03478535aca652af75f02874fa5348a9f949b97a WatchSource:0}: Error finding container a7c22b4e102ef9bd557d888c03478535aca652af75f02874fa5348a9f949b97a: Status 404 returned error can't find the container with id a7c22b4e102ef9bd557d888c03478535aca652af75f02874fa5348a9f949b97a
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.706749 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:11 crc kubenswrapper[4770]: E1209 14:25:11.707144 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.207124339 +0000 UTC m=+144.103326475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.707268 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.735573 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.739824 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.741029 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x2gs6"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.744748 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.746556 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.746579 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-th5m2"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.760577 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.765897 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w9vwd"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.767391 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r"]
Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.769407 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt"]
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt"] Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.773160 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gw46q"] Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.808052 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:11 crc kubenswrapper[4770]: E1209 14:25:11.808251 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.308220837 +0000 UTC m=+144.204422963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.808721 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:11 crc kubenswrapper[4770]: E1209 14:25:11.809471 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.309460771 +0000 UTC m=+144.205662917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:11 crc kubenswrapper[4770]: I1209 14:25:11.910492 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:11 crc kubenswrapper[4770]: E1209 14:25:11.910979 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.012017 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.012388 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.512370877 +0000 UTC m=+144.408573013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.113285 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.113542 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.613507706 +0000 UTC m=+144.509709842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.113673 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.114065 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.614055471 +0000 UTC m=+144.510257607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.214236 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.214639 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.714622794 +0000 UTC m=+144.610824930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.255922 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 14:25:12 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld
Dec 09 14:25:12 crc kubenswrapper[4770]: [+]process-running ok
Dec 09 14:25:12 crc kubenswrapper[4770]: healthz check failed
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.255980 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.315837 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.316181 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.816163214 +0000 UTC m=+144.712365370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.417527 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.417812 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.917780836 +0000 UTC m=+144.813982982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.421503 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj"
Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.422413 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:12.922386576 +0000 UTC m=+144.818588752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:12 crc kubenswrapper[4770]: W1209 14:25:12.450344 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf84b1fb_1cf8_467d_b4ab_c2a37bcefe0e.slice/crio-7d47f15119081227f3041c705dbfc80bd1a7099f8f71bb5ff40c41f8cd1c3a29 WatchSource:0}: Error finding container 7d47f15119081227f3041c705dbfc80bd1a7099f8f71bb5ff40c41f8cd1c3a29: Status 404 returned error can't find the container with id 7d47f15119081227f3041c705dbfc80bd1a7099f8f71bb5ff40c41f8cd1c3a29
Dec 09 14:25:12 crc kubenswrapper[4770]: W1209 14:25:12.478410 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c3aaa7b_f11e_484b_bf29_7b237e496506.slice/crio-9a7e9f91050e9e5c88752787f03605cf82b9f560fa036e084d1bba8cb7381a3e WatchSource:0}: Error finding container 9a7e9f91050e9e5c88752787f03605cf82b9f560fa036e084d1bba8cb7381a3e: Status 404 returned error can't find the container with id 9a7e9f91050e9e5c88752787f03605cf82b9f560fa036e084d1bba8cb7381a3e
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.523328 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.523638 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.023615857 +0000 UTC m=+144.919817993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.574990 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" event={"ID":"cc9bbab9-3f87-4e24-923b-8af770ccbbfc","Type":"ContainerStarted","Data":"323d5e44da413e0449d3fd678aa4855b9acc0c42b300efea4ff0c82e91cbb0e4"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.582538 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" event={"ID":"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf","Type":"ContainerStarted","Data":"a8aa10fdce8c884273a9ff43970d90f1fdf31081980e940d4ac7520b20371273"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.584301 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" event={"ID":"3c3aaa7b-f11e-484b-bf29-7b237e496506","Type":"ContainerStarted","Data":"9a7e9f91050e9e5c88752787f03605cf82b9f560fa036e084d1bba8cb7381a3e"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625366 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" event={"ID":"c429a77c-763f-4db9-b2e9-7090262bf700","Type":"ContainerStarted","Data":"5bd683707306fde5b096460cc40782ecd10e3d1fb037cbc4aefc54c0ccaf3573"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625417 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" event={"ID":"d377b023-282a-4a7f-a2fb-d944873c3bbb","Type":"ContainerStarted","Data":"33f87f7915f8ad7c91266f2ad933eaaf51282a65377ad377886a4e2231ea9751"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625428 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" event={"ID":"7f3f1c07-833a-42f8-83a4-57683456d858","Type":"ContainerStarted","Data":"f3293b701e5c252cfe6f3712ca133c75807ccf9776d07ac55626ef2456d48e66"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625438 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" event={"ID":"e84eac50-0a7a-41d2-bfdb-4b825beef104","Type":"ContainerStarted","Data":"e83e64090d4b79e2c6565f8d1b6fae497b9700f11d5fa480baaacd884b77a82f"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625451 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" event={"ID":"af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e","Type":"ContainerStarted","Data":"7d47f15119081227f3041c705dbfc80bd1a7099f8f71bb5ff40c41f8cd1c3a29"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625461 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" event={"ID":"bf390656-1b39-4f08-922b-0bc77c98a897","Type":"ContainerStarted","Data":"752e5cac2eb846b2211d08cf0c43c2ae16fb80db31394da6457fba5c96c49d42"}
event={"ID":"bf390656-1b39-4f08-922b-0bc77c98a897","Type":"ContainerStarted","Data":"752e5cac2eb846b2211d08cf0c43c2ae16fb80db31394da6457fba5c96c49d42"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625472 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x2gs6" event={"ID":"c1822a0a-3dcd-455f-a11c-15c6171f2068","Type":"ContainerStarted","Data":"0d29c691627de94b79e48ba9db9a1fef7b1686dbec7cc21aae4fab2393152cb3"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625481 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" event={"ID":"7726e7fd-9414-4440-a0f6-caeb757f7001","Type":"ContainerStarted","Data":"4085447aa12fa0d3f10b8637a92028e1a736c301c4d955542da11a753e9f6c26"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625496 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" event={"ID":"f549f5f6-e480-4574-b59e-6b581bb4ac41","Type":"ContainerStarted","Data":"cdf48b9f2673e93a59a9040a04bcbe09dcd45a0f95e9de382dc7be52b1e175ac"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625508 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" event={"ID":"08d594b0-871f-4f3f-9d64-f14f0773be76","Type":"ContainerStarted","Data":"561c046d32b3fe6469715b5c3c106baa21398dbcc401e9b885cf543e9f71b9c0"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.625519 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" event={"ID":"6424ee19-eb6b-462d-948f-e9e8a16936a8","Type":"ContainerStarted","Data":"8796a604e293aff1d40ab98aae86170a69b67e00ea113599a09be760adeaa4e7"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.626070 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.626499 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.126456454 +0000 UTC m=+145.022658590 (durationBeforeRetry 500ms). 
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.630748 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" event={"ID":"aa31696f-9a08-4331-b0a7-e3f396284903","Type":"ContainerStarted","Data":"62a9d2a957ea41247747a802ecac46cc5dc5bb163a925ba500d1748f9587bd4a"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.633684 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" event={"ID":"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6","Type":"ContainerStarted","Data":"95713e9c091a713b8e7c617848ee374f89105d75043bf593c49b219764c22d97"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.635101 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" event={"ID":"d4f85c79-a083-4299-85cc-4ca7a7cd0bae","Type":"ContainerStarted","Data":"40cf7aef1a13affb0e75222a3afd0992d7d617b82b6c61ecd950bb6c8e551897"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.636585 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" event={"ID":"452dfc2f-000a-4b05-844a-ef541824574f","Type":"ContainerStarted","Data":"650fd8cd704242083e27fce51a63d3e54020631e4acdd4fffca3ddce4ab51589"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.637522 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" event={"ID":"ad3f94de-e874-4a00-81f7-2de81795621a","Type":"ContainerStarted","Data":"36f823a1bc965364bf744a57488fa0a5160bfe0c5b5ba8075bbc825daffe3200"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.638675 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" event={"ID":"b7f2bcb7-c30a-4920-8781-21c53a2ea81f","Type":"ContainerStarted","Data":"e4c81c0b210a9d9aa6486662c38d277bd32576c13dd61be1cee7131d4d260cb0"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.638709 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" event={"ID":"b7f2bcb7-c30a-4920-8781-21c53a2ea81f","Type":"ContainerStarted","Data":"6c2dbe74fd9f1b890125c2d45ed4c188c97c6c92fbde403fc104b983ec19667e"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.639654 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9rzmw" event={"ID":"24992d6a-277e-43cb-8d65-9ddbcfdad19a","Type":"ContainerStarted","Data":"a7c22b4e102ef9bd557d888c03478535aca652af75f02874fa5348a9f949b97a"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.641774 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" event={"ID":"7416f097-b90b-46d1-b02d-b08b277b687d","Type":"ContainerStarted","Data":"9d9f5b9e11ae20a2a14c2310851885445bbbbba92f7c2d5299cbd81f2d08ea13"}
Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.642911 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" event={"ID":"a91ebfed-0165-4fea-b8f5-560094e2e1a0","Type":"ContainerStarted","Data":"4ff3814fbec67af41e3fd5328dfc86fa2e2d3d20f645e7daff58047a4230226f"}
kubenswrapper[4770]: I1209 14:25:12.642911 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" event={"ID":"a91ebfed-0165-4fea-b8f5-560094e2e1a0","Type":"ContainerStarted","Data":"4ff3814fbec67af41e3fd5328dfc86fa2e2d3d20f645e7daff58047a4230226f"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.643832 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" event={"ID":"af35dcd7-1005-457a-a0a0-5f72b10b5ff8","Type":"ContainerStarted","Data":"39fd1a9bda603c4dc4bc3f2b085169a864addadc34e2aeaf3c97d8b0397f3bd6"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.644755 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4" event={"ID":"f0d2ce2e-1c11-4912-ba78-f4ae51537765","Type":"ContainerStarted","Data":"859c58f9c707988fa9a7481c3806546d30609c98c77a6ee5ed7e726a768f8e92"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.645699 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" event={"ID":"93465239-f81e-4801-95d3-520b7378fd5f","Type":"ContainerStarted","Data":"51dfd5d16af7b43788c3fa0e78e5a89ce1e9e3ae7321edf5d212e0e02d2044d9"} Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.734253 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.734391 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.234366363 +0000 UTC m=+145.130568509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.734785 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.735190 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.235179887 +0000 UTC m=+145.131382023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.839514 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.839877 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.339845034 +0000 UTC m=+145.236047170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.840022 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.840514 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.340496332 +0000 UTC m=+145.236698468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.941562 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.941830 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.441799936 +0000 UTC m=+145.338002082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:12 crc kubenswrapper[4770]: I1209 14:25:12.942203 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:12 crc kubenswrapper[4770]: E1209 14:25:12.942701 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.442683871 +0000 UTC m=+145.338886007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.042860 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.043021 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.542993106 +0000 UTC m=+145.439195242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.043375 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.044045 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.544030166 +0000 UTC m=+145.440232302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.144745 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.144932 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.644900537 +0000 UTC m=+145.541102683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.145575 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.145933 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.645924435 +0000 UTC m=+145.542126571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.248058 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.248334 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.74831017 +0000 UTC m=+145.644512306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.249839 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.250298 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.750276344 +0000 UTC m=+145.646478640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.261002 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:13 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:13 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:13 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.261067 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.351174 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.352888 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.852864034 +0000 UTC m=+145.749066170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.455587 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.456160 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:13.956131583 +0000 UTC m=+145.852333709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.559090 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.559503 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.059462484 +0000 UTC m=+145.955664770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.563470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.564169 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.064147436 +0000 UTC m=+145.960349642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.668927 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.669343 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.169321868 +0000 UTC m=+146.065524004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.770922 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.771548 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.271526057 +0000 UTC m=+146.167728193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.858031 4770 generic.go:334] "Generic (PLEG): container finished" podID="2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf" containerID="a8aa10fdce8c884273a9ff43970d90f1fdf31081980e940d4ac7520b20371273" exitCode=0 Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.858150 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" event={"ID":"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf","Type":"ContainerDied","Data":"a8aa10fdce8c884273a9ff43970d90f1fdf31081980e940d4ac7520b20371273"} Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.872097 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.872845 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.37281755 +0000 UTC m=+146.269019686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.874991 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" event={"ID":"3c3aaa7b-f11e-484b-bf29-7b237e496506","Type":"ContainerStarted","Data":"915ec9c2892dd34e914b367c03b2c0883a3e5c71e2d1c39f0c092aee87affbe8"} Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.877569 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4" event={"ID":"f0d2ce2e-1c11-4912-ba78-f4ae51537765","Type":"ContainerStarted","Data":"0100c32bfc7aea60ff9748169d6d43dd0ab5c08154ba9ab11be34ec8520e6e88"} Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.884708 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" event={"ID":"7f3f1c07-833a-42f8-83a4-57683456d858","Type":"ContainerStarted","Data":"82da9022c6f745f91fcf6ed2cf648f1971f13f51440f1c197921aff39f6b997f"} Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.891157 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x2gs6" event={"ID":"c1822a0a-3dcd-455f-a11c-15c6171f2068","Type":"ContainerStarted","Data":"bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7"} Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.930364 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8vwj6" event={"ID":"fbd4701f-941b-4955-aa30-ead9b8b203c0","Type":"ContainerStarted","Data":"c6631c9ae1e80a005b1f1697e317d5e582304a93a2d0f0f94d3f24ed9ae95d1a"} Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.941139 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" event={"ID":"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086","Type":"ContainerStarted","Data":"430ea0cf72b2ab03a618fadf887b3a95a3522f4d82b2e7252511d1c7ed91db98"} Dec 09 14:25:13 crc kubenswrapper[4770]: I1209 14:25:13.978771 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:13 crc kubenswrapper[4770]: E1209 14:25:13.979749 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.479710731 +0000 UTC m=+146.375913057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.032264 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" event={"ID":"d4f85c79-a083-4299-85cc-4ca7a7cd0bae","Type":"ContainerStarted","Data":"b82ecffeeb67828b4169b89ce33f1cf31daeaeede825ea64a48f47551deb3205"} Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.034444 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x2gs6" podStartSLOduration=125.034423002 podStartE2EDuration="2m5.034423002s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.032784176 +0000 UTC m=+145.928986312" watchObservedRunningTime="2025-12-09 14:25:14.034423002 +0000 UTC m=+145.930625138" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.061465 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" event={"ID":"a91ebfed-0165-4fea-b8f5-560094e2e1a0","Type":"ContainerStarted","Data":"087983cfa7412a0c9622fb1924836fddf4e97141ae6360543c5d880fa64a25f0"} Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.062540 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.071372 4770 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4cvtt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.071436 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" podUID="a91ebfed-0165-4fea-b8f5-560094e2e1a0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.082006 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.083323 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.583301359 +0000 UTC m=+146.479503555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.083490 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" event={"ID":"ba79bdf2-2665-4caf-8c00-20c1405c319b","Type":"ContainerStarted","Data":"1973ba5ee58a1984f6458db0888f892fe2f2195e46acb220ddbfd90f762c03e8"} Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.103876 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" event={"ID":"bf390656-1b39-4f08-922b-0bc77c98a897","Type":"ContainerStarted","Data":"72f46155b23a8ba71aa178079bd3ba6a5302111e24928aceade6d17f77f1ace0"} Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.103919 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.118485 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.125570 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" podStartSLOduration=125.125552838 podStartE2EDuration="2m5.125552838s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.068391689 +0000 UTC m=+145.964593815" watchObservedRunningTime="2025-12-09 14:25:14.125552838 +0000 UTC m=+146.021754974" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.126698 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" podStartSLOduration=125.1266915 podStartE2EDuration="2m5.1266915s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.124465588 +0000 UTC m=+146.020667724" watchObservedRunningTime="2025-12-09 14:25:14.1266915 +0000 UTC m=+146.022893636" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.164626 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8vwj6" podStartSLOduration=10.164604919 podStartE2EDuration="10.164604919s" podCreationTimestamp="2025-12-09 14:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.163768505 +0000 UTC m=+146.059970641" watchObservedRunningTime="2025-12-09 14:25:14.164604919 +0000 UTC m=+146.060807055" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.194310 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.194855 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.694836861 +0000 UTC m=+146.591038997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.258304 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-sl22j" podStartSLOduration=125.258263716 podStartE2EDuration="2m5.258263716s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.240031273 +0000 UTC m=+146.136233409" watchObservedRunningTime="2025-12-09 14:25:14.258263716 +0000 UTC m=+146.154465862" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.263035 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.263095 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.275145 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:14 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:14 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:14 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.275240 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.295700 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.297952 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.797932523 +0000 UTC m=+146.694134659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.337404 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" podStartSLOduration=125.337387166 podStartE2EDuration="2m5.337387166s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.295005651 +0000 UTC m=+146.191207787" watchObservedRunningTime="2025-12-09 14:25:14.337387166 +0000 UTC m=+146.233589302" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.337921 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" podStartSLOduration=125.33791578 podStartE2EDuration="2m5.33791578s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.331709566 +0000 UTC m=+146.227911702" watchObservedRunningTime="2025-12-09 14:25:14.33791578 +0000 UTC m=+146.234117926" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.360047 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-486mt" podStartSLOduration=125.360030254 podStartE2EDuration="2m5.360030254s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.358967873 +0000 UTC m=+146.255170009" watchObservedRunningTime="2025-12-09 14:25:14.360030254 +0000 UTC m=+146.256232390" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.414616 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-47k7z" podStartSLOduration=125.41458728 podStartE2EDuration="2m5.41458728s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.412678456 +0000 UTC m=+146.308880592" watchObservedRunningTime="2025-12-09 14:25:14.41458728 +0000 UTC m=+146.310789406" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.417223 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.417665 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:14.917645556 +0000 UTC m=+146.813847692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.471031 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhbrp" podStartSLOduration=125.471005199 podStartE2EDuration="2m5.471005199s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.468372624 +0000 UTC m=+146.364574760" watchObservedRunningTime="2025-12-09 14:25:14.471005199 +0000 UTC m=+146.367207335" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.505055 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" podStartSLOduration=125.505035337 podStartE2EDuration="2m5.505035337s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.50405701 +0000 UTC m=+146.400271916" watchObservedRunningTime="2025-12-09 14:25:14.505035337 +0000 UTC m=+146.401237463" Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.517700 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.518055 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.018024473 +0000 UTC m=+146.914226619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.518361 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.519115 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.019101893 +0000 UTC m=+146.915304019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.635714 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.635931 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.135891993 +0000 UTC m=+147.032094139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.636321 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.636965 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.136948693 +0000 UTC m=+147.033150829 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.737149 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.737584 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.237568807 +0000 UTC m=+147.133770943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:14 crc kubenswrapper[4770]: I1209 14:25:14.844475 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:14 crc kubenswrapper[4770]: E1209 14:25:14.844919 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.344901651 +0000 UTC m=+147.241103787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:14.958329 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:14.958997 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.458978663 +0000 UTC m=+147.355180799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.060041 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.060648 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.560617587 +0000 UTC m=+147.456819723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.097117 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.138265 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" event={"ID":"af35dcd7-1005-457a-a0a0-5f72b10b5ff8","Type":"ContainerStarted","Data":"df4bac0a993aedb44af10bd8f0fda4402ea615e2e4afc76749a115bb00b11f0d"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.150298 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" event={"ID":"ba79bdf2-2665-4caf-8c00-20c1405c319b","Type":"ContainerStarted","Data":"1719e7861366e6260e1fc4b8cbac28a990fb497d1f5f77be8ad1c0a0b66f427b"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.163309 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.163933 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.663902856 +0000 UTC m=+147.560104992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.172419 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g9j6b" podStartSLOduration=126.172401306 podStartE2EDuration="2m6.172401306s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:14.533852869 +0000 UTC m=+146.430055015" watchObservedRunningTime="2025-12-09 14:25:15.172401306 +0000 UTC m=+147.068603442" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.183069 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" event={"ID":"af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e","Type":"ContainerStarted","Data":"6e012dd3d0d1eeda5b52eb66510c9f6b9d3b5a560860a87013e7c8b8cea68f74"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.212545 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-lwf4r" podStartSLOduration=126.212514455 podStartE2EDuration="2m6.212514455s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.207047861 +0000 UTC m=+147.103249987" watchObservedRunningTime="2025-12-09 14:25:15.212514455 +0000 UTC m=+147.108716591" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.223864 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" event={"ID":"cc9bbab9-3f87-4e24-923b-8af770ccbbfc","Type":"ContainerStarted","Data":"372ca6363fc28ecc5f4b2d312e7d1fcec4ee689b55a90dca42bbc59a179a6fb9"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.252200 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9rzmw" event={"ID":"24992d6a-277e-43cb-8d65-9ddbcfdad19a","Type":"ContainerStarted","Data":"4f06e55488aa7c7727ac2aa84dbae6c9c63793b97f85fac83ed5b7fbbc4e007d"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.270159 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.272175 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.772160265 +0000 UTC m=+147.668362401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.281920 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:15 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:15 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:15 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.281986 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.284597 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gqqz7" podStartSLOduration=126.284580455 podStartE2EDuration="2m6.284580455s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.281696094 +0000 UTC m=+147.177898230" watchObservedRunningTime="2025-12-09 14:25:15.284580455 +0000 UTC m=+147.180782591" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.307440 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4" event={"ID":"f0d2ce2e-1c11-4912-ba78-f4ae51537765","Type":"ContainerStarted","Data":"ea3cea26b7668f58921211f7bbc5ffcc8ccc19db6fad76024ea7f99e188f0a83"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.323230 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b89dw" podStartSLOduration=126.323206773 podStartE2EDuration="2m6.323206773s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.322111092 +0000 UTC m=+147.218313238" watchObservedRunningTime="2025-12-09 14:25:15.323206773 +0000 UTC m=+147.219408939" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.372053 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.372518 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.372572 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.374216 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.874186589 +0000 UTC m=+147.770388735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.378497 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gw46q" event={"ID":"452dfc2f-000a-4b05-844a-ef541824574f","Type":"ContainerStarted","Data":"3c79ac9697ecc77d25e03b38fef14aca52412a77666f1dc5354564ba058420ab"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.379993 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.386019 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" event={"ID":"6424ee19-eb6b-462d-948f-e9e8a16936a8","Type":"ContainerStarted","Data":"d7409f2b01367028798df91dd4fc1b313a60663c07d8320375483c785678f0a2"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.390343 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.410381 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpkpk" podStartSLOduration=126.410343268 podStartE2EDuration="2m6.410343268s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.389062068 +0000 UTC m=+147.285264204" 
watchObservedRunningTime="2025-12-09 14:25:15.410343268 +0000 UTC m=+147.306545404" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.411130 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" event={"ID":"93465239-f81e-4801-95d3-520b7378fd5f","Type":"ContainerStarted","Data":"f8dd50d6f662d504aaf2319f858156a4688a0c709cc808bb8728293651e64edc"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.421094 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" event={"ID":"7726e7fd-9414-4440-a0f6-caeb757f7001","Type":"ContainerStarted","Data":"37c3656d4e8efbfb215ac3c2e0f2cfcec9313dd3cb50c1f3332d4efb34fe6137"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.423137 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.449990 4770 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-rcwj8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.450048 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" podUID="7726e7fd-9414-4440-a0f6-caeb757f7001" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.459880 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" event={"ID":"d4f85c79-a083-4299-85cc-4ca7a7cd0bae","Type":"ContainerStarted","Data":"677e0c551536c93234b1492aab55be76b3f52e27295c3dd2a4c2b4a3332a5c92"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.478783 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.478950 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.479065 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.484409 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:15.984388333 +0000 UTC m=+147.880590469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.486126 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.522702 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.523188 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.536223 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qh72r" podStartSLOduration=126.536204162 podStartE2EDuration="2m6.536204162s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.535343138 +0000 UTC m=+147.431545274" watchObservedRunningTime="2025-12-09 14:25:15.536204162 +0000 UTC m=+147.432406308" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.537523 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8htf4" podStartSLOduration=126.537516569 podStartE2EDuration="2m6.537516569s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.450476078 +0000 UTC m=+147.346678214" watchObservedRunningTime="2025-12-09 14:25:15.537516569 +0000 UTC m=+147.433718705" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.538118 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wdlml" event={"ID":"826b4024-9fd1-4457-95f4-13dfd107b12b","Type":"ContainerStarted","Data":"4baf71f679d898659dc192f940b78515a9542a0a828e71224a8f210fc5060df2"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.582666 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.587541 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.087520088 +0000 UTC m=+147.983722224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.606678 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" event={"ID":"e84eac50-0a7a-41d2-bfdb-4b825beef104","Type":"ContainerStarted","Data":"6db2906eaa51837f520145ed05d4db3cf7fabdc795e6a65cb293a090e41ab002"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.655776 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" podStartSLOduration=126.655754999 podStartE2EDuration="2m6.655754999s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.606926094 +0000 UTC m=+147.503128220" watchObservedRunningTime="2025-12-09 14:25:15.655754999 +0000 UTC m=+147.551957135" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.672857 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" event={"ID":"d377b023-282a-4a7f-a2fb-d944873c3bbb","Type":"ContainerStarted","Data":"71ccc3e5996081275d4929d0f6ebad1cbd7ce6cc00f08da45ba70dc6d62faa7d"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.698040 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.698309 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.198296777 +0000 UTC m=+148.094498913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.699641 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-htx5v" podStartSLOduration=126.699630705 podStartE2EDuration="2m6.699630705s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.656093429 +0000 UTC m=+147.552295565" watchObservedRunningTime="2025-12-09 14:25:15.699630705 +0000 UTC m=+147.595832841" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.699989 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kwgt8" podStartSLOduration=126.699985765 podStartE2EDuration="2m6.699985765s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.697836805 +0000 UTC m=+147.594038941" watchObservedRunningTime="2025-12-09 14:25:15.699985765 +0000 UTC m=+147.596187891" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.731813 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" event={"ID":"ad3f94de-e874-4a00-81f7-2de81795621a","Type":"ContainerStarted","Data":"0ce5dfd31eff7262fbd70d44b5e4e2983eb913cee93b71afe08c8f87797d394b"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.733324 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.753109 4770 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k96sv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.753205 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" podUID="ad3f94de-e874-4a00-81f7-2de81795621a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.784503 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" podStartSLOduration=126.784485955 podStartE2EDuration="2m6.784485955s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.783294582 +0000 UTC m=+147.679496718" watchObservedRunningTime="2025-12-09 14:25:15.784485955 +0000 UTC 
m=+147.680688091" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.785236 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" event={"ID":"08d594b0-871f-4f3f-9d64-f14f0773be76","Type":"ContainerStarted","Data":"61dd46d8dc8e4106225f5dbf2568106c36247fc2e46766866246d4f184ae031e"} Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.785878 4770 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4cvtt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.785912 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" podUID="a91ebfed-0165-4fea-b8f5-560094e2e1a0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.787591 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.795849 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.819511 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.819972 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.319937814 +0000 UTC m=+148.216139950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.821128 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.823767 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:25:16.3237121 +0000 UTC m=+148.219914396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.923894 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:15 crc kubenswrapper[4770]: E1209 14:25:15.924435 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.424412177 +0000 UTC m=+148.320614303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.925468 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" podStartSLOduration=126.925443156 podStartE2EDuration="2m6.925443156s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.923526872 +0000 UTC m=+147.819729008" watchObservedRunningTime="2025-12-09 14:25:15.925443156 +0000 UTC m=+147.821645292" Dec 09 14:25:15 crc kubenswrapper[4770]: I1209 14:25:15.927279 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" podStartSLOduration=126.927268027 podStartE2EDuration="2m6.927268027s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:15.872628519 +0000 UTC m=+147.768830655" watchObservedRunningTime="2025-12-09 14:25:15.927268027 +0000 UTC m=+147.823470173" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.027991 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.028531 4770 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.528514119 +0000 UTC m=+148.424716255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.134554 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.135041 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.635007008 +0000 UTC m=+148.531209144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.135360 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.135909 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.635887044 +0000 UTC m=+148.532089180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.237161 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.237481 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.737447055 +0000 UTC m=+148.633649191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.256058 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:16 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:16 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:16 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.256141 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.265876 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tt465" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.315620 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.316356 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.339203 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.342352 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.842336288 +0000 UTC m=+148.738538424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.425028 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.425077 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.425122 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.425152 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.440406 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.440982 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.940939235 +0000 UTC m=+148.837141381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.441104 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.441612 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:16.941602425 +0000 UTC m=+148.837804561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.488548 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.488619 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.510657 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.547893 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.548079 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.048037333 +0000 UTC m=+148.944239469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.551188 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.551608 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.051594363 +0000 UTC m=+148.947796509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.655478 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.655842 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.155825099 +0000 UTC m=+149.052027235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.759219 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.760148 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.260126526 +0000 UTC m=+149.156328662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.862291 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.862711 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.362692366 +0000 UTC m=+149.258894492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.886118 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" event={"ID":"7f3f1c07-833a-42f8-83a4-57683456d858","Type":"ContainerStarted","Data":"271ed1f4810fa951929f80d64e249db474c252e667c255586672da7a385fba09"} Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.887673 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.919688 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9rzmw" event={"ID":"24992d6a-277e-43cb-8d65-9ddbcfdad19a","Type":"ContainerStarted","Data":"1689e024b5bc2edea51e421fd31277136c92d7a117a31c272df143b1fc2d5b8f"} Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.920620 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.937633 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" event={"ID":"2747fef1-ffa0-4b8f-80c8-1ca2096dfbdf","Type":"ContainerStarted","Data":"94c356410b35c08a2e135e7f6ccc8b4ddbad1cfa7dabe529775b8afd92bec7c8"} Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.937709 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.963691 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" event={"ID":"3c3aaa7b-f11e-484b-bf29-7b237e496506","Type":"ContainerStarted","Data":"ac426fd4b6df126aa10459f17fec8cdccff6dc72eb5599ae011b4c9e336763d2"} Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.965978 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:16 crc kubenswrapper[4770]: E1209 14:25:16.967784 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.467765375 +0000 UTC m=+149.363967701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:16 crc kubenswrapper[4770]: I1209 14:25:16.986273 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" event={"ID":"e84eac50-0a7a-41d2-bfdb-4b825beef104","Type":"ContainerStarted","Data":"8d15260b703ceac3646e003e00526833d188ee295b0cb7fb87e30e0611e3840c"} Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.012409 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a3e90bd0c1fa9912e4f16872746fd3bcd80f3cb5fdc16960164aac626d5cedbf"} Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.014855 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-th5m2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.014930 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.016031 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.016102 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4cvtt" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.016714 4770 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k96sv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.016776 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" podUID="ad3f94de-e874-4a00-81f7-2de81795621a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.037177 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-884kr" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.039500 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9rzmw" podStartSLOduration=13.039486986 podStartE2EDuration="13.039486986s" podCreationTimestamp="2025-12-09 14:25:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:17.039184827 +0000 UTC m=+148.935386973" watchObservedRunningTime="2025-12-09 14:25:17.039486986 +0000 UTC m=+148.935689112" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.041217 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" podStartSLOduration=128.041210155 podStartE2EDuration="2m8.041210155s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:16.980436473 +0000 UTC m=+148.876638609" watchObservedRunningTime="2025-12-09 14:25:17.041210155 +0000 UTC m=+148.937412291" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.072462 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.075509 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.57547067 +0000 UTC m=+149.471672966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:17 crc kubenswrapper[4770]: W1209 14:25:17.083916 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-c7020fba57efb450f7acce1443013c06bda055ed159918f164392e2b899e349e WatchSource:0}: Error finding container c7020fba57efb450f7acce1443013c06bda055ed159918f164392e2b899e349e: Status 404 returned error can't find the container with id c7020fba57efb450f7acce1443013c06bda055ed159918f164392e2b899e349e Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.093389 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" podStartSLOduration=128.093361043 podStartE2EDuration="2m8.093361043s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:17.088475916 +0000 UTC m=+148.984678072" watchObservedRunningTime="2025-12-09 14:25:17.093361043 +0000 UTC m=+148.989563179" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.113671 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcwj8" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.157704 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.158250 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.167000 4770 patch_prober.go:28] interesting pod/console-f9d7485db-x2gs6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.167105 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x2gs6" podUID="c1822a0a-3dcd-455f-a11c-15c6171f2068" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.175003 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.176912 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.676892626 +0000 UTC m=+149.573094762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.266616 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.283443 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.283605 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:17 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:17 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:17 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.283701 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.285626 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.785603089 +0000 UTC m=+149.681805225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.356165 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8z2bt" podStartSLOduration=128.356125145 podStartE2EDuration="2m8.356125145s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:17.259067911 +0000 UTC m=+149.155270057" watchObservedRunningTime="2025-12-09 14:25:17.356125145 +0000 UTC m=+149.252327281" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.368329 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-w9vwd" podStartSLOduration=128.368301877 podStartE2EDuration="2m8.368301877s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:17.36733304 +0000 UTC m=+149.263535176" watchObservedRunningTime="2025-12-09 14:25:17.368301877 +0000 UTC m=+149.264504013" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.388086 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.388999 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.88898155 +0000 UTC m=+149.785183686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.494059 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.494888 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:17.994866463 +0000 UTC m=+149.891068599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.521524 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-th5m2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.521615 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.521667 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-th5m2 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.521773 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.597112 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: 
\"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.597637 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.097613496 +0000 UTC m=+149.993815632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.698878 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.700102 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.200051012 +0000 UTC m=+150.096253308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.801121 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.801573 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.301552911 +0000 UTC m=+150.197755047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.916290 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.916527 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.416492958 +0000 UTC m=+150.312695104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:17 crc kubenswrapper[4770]: I1209 14:25:17.916923 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:17 crc kubenswrapper[4770]: E1209 14:25:17.917420 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.417408824 +0000 UTC m=+150.313610970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.023228 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.023530 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.523495412 +0000 UTC m=+150.419697558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.023595 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.024082 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.524065699 +0000 UTC m=+150.420267835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.056416 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5ce75b7942a1dbd09af89e37a43957b8c6e354f716d3284fbcc2c640bf557c32"} Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.070273 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d1ae6c63ec2f3c247ff14b38282908d899d359d704280f0be7dd612e051404de"} Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.070346 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c7020fba57efb450f7acce1443013c06bda055ed159918f164392e2b899e349e"} Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.077804 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wdlml" event={"ID":"826b4024-9fd1-4457-95f4-13dfd107b12b","Type":"ContainerStarted","Data":"c70669c3d6701e7ff6b997510d0de7e64544e14c2635ab6783cad7e639ec2f66"} Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.088640 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b7adaa9d4888fa83490d013010fbef631837618e2322bb75117034519a23e10a"} Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.088707 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3c16448cc505d964b6c714774131641627a28035c8d2467dc8c7f8a66f7e154d"} Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.089053 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.092840 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-th5m2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.092879 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.134421 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.136487 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.636464885 +0000 UTC m=+150.532667021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.239918 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.240501 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.740478394 +0000 UTC m=+150.636680530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.258453 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:18 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:18 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:18 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.258500 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.299057 4770 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k96sv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.299169 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" podUID="ad3f94de-e874-4a00-81f7-2de81795621a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.299278 4770 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k96sv container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.299368 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" podUID="ad3f94de-e874-4a00-81f7-2de81795621a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.341939 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.342434 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.842390854 +0000 UTC m=+150.738592980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.443953 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.444625 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:18.944604734 +0000 UTC m=+150.840806870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.545955 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.546626 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:19.046577396 +0000 UTC m=+150.942779532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.648081 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.648483 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:19.148466746 +0000 UTC m=+151.044668882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.749486 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.750163 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:19.25012544 +0000 UTC m=+151.146327576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.799287 4770 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.863832 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.864249 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:19.364235283 +0000 UTC m=+151.260437419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.927535 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4sg97" Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.965107 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.965285 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:19.46526249 +0000 UTC m=+151.361464626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.965343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:18 crc kubenswrapper[4770]: E1209 14:25:18.965670 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:25:19.465662021 +0000 UTC m=+151.361864157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ztlqj" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:18 crc kubenswrapper[4770]: I1209 14:25:18.974982 4770 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-09T14:25:18.799317745Z","Handler":null,"Name":""} Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.117586 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:19 crc kubenswrapper[4770]: E1209 14:25:19.117911 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 14:25:19.617872408 +0000 UTC m=+151.514074714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.137432 4770 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.137495 4770 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.186789 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wdlml" event={"ID":"826b4024-9fd1-4457-95f4-13dfd107b12b","Type":"ContainerStarted","Data":"77146ab475dd84074a32f4a8b0345ffe5a7e83894e324d900eb2f959e6f98c36"} Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.311104 4770 patch_prober.go:28] interesting pod/apiserver-76f77b778f-q46jb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 09 14:25:19 crc kubenswrapper[4770]: [+]log ok Dec 09 14:25:19 crc kubenswrapper[4770]: [+]etcd ok Dec 09 14:25:19 crc kubenswrapper[4770]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 09 14:25:19 crc kubenswrapper[4770]: [+]poststarthook/generic-apiserver-start-informers ok Dec 09 14:25:19 crc kubenswrapper[4770]: [-]poststarthook/max-in-flight-filter failed: reason withheld Dec 09 14:25:19 crc kubenswrapper[4770]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 09 14:25:19 crc kubenswrapper[4770]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 09 14:25:19 crc kubenswrapper[4770]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 09 14:25:19 crc kubenswrapper[4770]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 09 14:25:19 crc kubenswrapper[4770]: [+]poststarthook/project.openshift.io-projectcache ok Dec 09 14:25:19 crc kubenswrapper[4770]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 09 14:25:19 crc kubenswrapper[4770]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Dec 09 14:25:19 crc kubenswrapper[4770]: [-]poststarthook/openshift.io-restmapperupdater failed: reason withheld Dec 09 14:25:19 crc kubenswrapper[4770]: [-]poststarthook/quota.openshift.io-clusterquotamapping failed: reason withheld Dec 09 14:25:19 crc kubenswrapper[4770]: livez check failed Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.312008 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" podUID="d377b023-282a-4a7f-a2fb-d944873c3bbb" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.317943 4770 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k96sv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.318085 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" podUID="ad3f94de-e874-4a00-81f7-2de81795621a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.328599 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8v5jt"] Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.331451 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:19 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:19 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:19 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.331521 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.550252 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.553401 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nhncv"] Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.554643 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.649172 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.669194 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.669263 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.685216 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.685710 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.742747 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8v5jt"] Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.801876 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nhncv"] Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.805169 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8lkmb"] Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.830807 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-utilities\") pod \"community-operators-8v5jt\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.831854 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-utilities\") pod \"certified-operators-nhncv\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.831990 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-catalog-content\") pod \"certified-operators-nhncv\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.832048 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-catalog-content\") pod \"community-operators-8v5jt\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.832076 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mk9t\" (UniqueName: \"kubernetes.io/projected/869de9e6-0d73-42f6-bf6a-49cc26a84531-kube-api-access-5mk9t\") pod \"community-operators-8v5jt\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " 
pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.832123 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5v4\" (UniqueName: \"kubernetes.io/projected/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-kube-api-access-wn5v4\") pod \"certified-operators-nhncv\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.838147 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.869670 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kfhcl"] Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.871247 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.919066 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lkmb"] Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941190 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-utilities\") pod \"certified-operators-nhncv\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941251 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-catalog-content\") pod \"certified-operators-nhncv\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941288 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-utilities\") pod \"community-operators-8lkmb\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941323 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-catalog-content\") pod \"community-operators-8v5jt\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941353 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crffh\" (UniqueName: \"kubernetes.io/projected/d6f083cd-d57a-4162-a704-37cd9dd3be45-kube-api-access-crffh\") pod \"certified-operators-kfhcl\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941382 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk9t\" (UniqueName: \"kubernetes.io/projected/869de9e6-0d73-42f6-bf6a-49cc26a84531-kube-api-access-5mk9t\") pod \"community-operators-8v5jt\" (UID: 
\"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941425 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn5v4\" (UniqueName: \"kubernetes.io/projected/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-kube-api-access-wn5v4\") pod \"certified-operators-nhncv\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941460 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-utilities\") pod \"certified-operators-kfhcl\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941499 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9j9\" (UniqueName: \"kubernetes.io/projected/008593f8-0fc2-4138-abc9-ca200aef7426-kube-api-access-mw9j9\") pod \"community-operators-8lkmb\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941517 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-catalog-content\") pod \"community-operators-8lkmb\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941545 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-utilities\") pod \"community-operators-8v5jt\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941564 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-catalog-content\") pod \"certified-operators-kfhcl\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.941917 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-utilities\") pod \"certified-operators-nhncv\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.944212 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-catalog-content\") pod \"community-operators-8v5jt\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.944645 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-catalog-content\") pod \"certified-operators-nhncv\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.944750 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-utilities\") pod \"community-operators-8v5jt\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.949810 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kfhcl"] Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.958701 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ztlqj\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.983675 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn5v4\" (UniqueName: \"kubernetes.io/projected/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-kube-api-access-wn5v4\") pod \"certified-operators-nhncv\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:19 crc kubenswrapper[4770]: I1209 14:25:19.996678 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mk9t\" (UniqueName: \"kubernetes.io/projected/869de9e6-0d73-42f6-bf6a-49cc26a84531-kube-api-access-5mk9t\") pod \"community-operators-8v5jt\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.018448 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.077950 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.078480 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-utilities\") pod \"community-operators-8lkmb\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.078610 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crffh\" (UniqueName: \"kubernetes.io/projected/d6f083cd-d57a-4162-a704-37cd9dd3be45-kube-api-access-crffh\") pod \"certified-operators-kfhcl\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.078789 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-utilities\") pod \"certified-operators-kfhcl\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.078902 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw9j9\" (UniqueName: \"kubernetes.io/projected/008593f8-0fc2-4138-abc9-ca200aef7426-kube-api-access-mw9j9\") pod \"community-operators-8lkmb\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.079024 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-catalog-content\") pod \"community-operators-8lkmb\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.079221 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-catalog-content\") pod \"certified-operators-kfhcl\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.080203 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-catalog-content\") pod \"certified-operators-kfhcl\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.081335 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-catalog-content\") pod \"community-operators-8lkmb\" (UID: 
\"008593f8-0fc2-4138-abc9-ca200aef7426\") " pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.081551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-utilities\") pod \"community-operators-8lkmb\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.097502 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.098042 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-utilities\") pod \"certified-operators-kfhcl\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.117157 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw9j9\" (UniqueName: \"kubernetes.io/projected/008593f8-0fc2-4138-abc9-ca200aef7426-kube-api-access-mw9j9\") pod \"community-operators-8lkmb\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.130385 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crffh\" (UniqueName: \"kubernetes.io/projected/d6f083cd-d57a-4162-a704-37cd9dd3be45-kube-api-access-crffh\") pod \"certified-operators-kfhcl\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.172240 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.192398 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.204463 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wdlml" event={"ID":"826b4024-9fd1-4457-95f4-13dfd107b12b","Type":"ContainerStarted","Data":"6e22162449e0c25db29be7e365c02592dcb12e62320ec8b2b2ef8eab6e96d725"} Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.207694 4770 generic.go:334] "Generic (PLEG): container finished" podID="07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086" containerID="430ea0cf72b2ab03a618fadf887b3a95a3522f4d82b2e7252511d1c7ed91db98" exitCode=0 Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.207721 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" event={"ID":"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086","Type":"ContainerDied","Data":"430ea0cf72b2ab03a618fadf887b3a95a3522f4d82b2e7252511d1c7ed91db98"} Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.235911 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.236167 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wdlml" podStartSLOduration=16.236142626 podStartE2EDuration="16.236142626s" podCreationTimestamp="2025-12-09 14:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:20.234718576 +0000 UTC m=+152.130920712" watchObservedRunningTime="2025-12-09 14:25:20.236142626 +0000 UTC m=+152.132344762" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.254442 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.267335 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:20 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:20 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:20 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.267418 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.620491 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 09 14:25:20 crc kubenswrapper[4770]: I1209 14:25:20.749459 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8v5jt"] Dec 09 14:25:20 crc kubenswrapper[4770]: W1209 14:25:20.821962 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod869de9e6_0d73_42f6_bf6a_49cc26a84531.slice/crio-8ad845283511bed07f96a688f028d60d9d1a42863e5e350f9e13df4868307c94 WatchSource:0}: Error finding container 8ad845283511bed07f96a688f028d60d9d1a42863e5e350f9e13df4868307c94: Status 404 returned error can't find the container with id 8ad845283511bed07f96a688f028d60d9d1a42863e5e350f9e13df4868307c94 Dec 09 14:25:21 crc kubenswrapper[4770]: W1209 14:25:21.028144 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57ed857f_e806_40ad_bd78_4aecbfc24699.slice/crio-c4179adf33eac3b58eb20d18662067c80e1aa8b3f2359d70f063e832474bfb0d WatchSource:0}: Error finding container c4179adf33eac3b58eb20d18662067c80e1aa8b3f2359d70f063e832474bfb0d: Status 404 returned error can't find the container with id c4179adf33eac3b58eb20d18662067c80e1aa8b3f2359d70f063e832474bfb0d Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.030104 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ztlqj"] Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.050541 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kfhcl"] Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.064283 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lkmb"] Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.094487 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nhncv"] Dec 09 14:25:21 crc kubenswrapper[4770]: W1209 14:25:21.105831 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58ab865b_2f32_439d_8e32_db4f8b4a6e2b.slice/crio-31f2e40aebb073e161a37d140db14f3a90d1d4c935ee0943f4adc76a60e3f72a WatchSource:0}: Error finding container 31f2e40aebb073e161a37d140db14f3a90d1d4c935ee0943f4adc76a60e3f72a: Status 404 returned error 
can't find the container with id 31f2e40aebb073e161a37d140db14f3a90d1d4c935ee0943f4adc76a60e3f72a Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.220429 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v5jt" event={"ID":"869de9e6-0d73-42f6-bf6a-49cc26a84531","Type":"ContainerStarted","Data":"8ad845283511bed07f96a688f028d60d9d1a42863e5e350f9e13df4868307c94"} Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.221710 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lkmb" event={"ID":"008593f8-0fc2-4138-abc9-ca200aef7426","Type":"ContainerStarted","Data":"c49ab7788cae45903084f40ad51b881a7aee359020e31ce0863abd5a7114d99d"} Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.225130 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nhncv" event={"ID":"58ab865b-2f32-439d-8e32-db4f8b4a6e2b","Type":"ContainerStarted","Data":"31f2e40aebb073e161a37d140db14f3a90d1d4c935ee0943f4adc76a60e3f72a"} Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.226773 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfhcl" event={"ID":"d6f083cd-d57a-4162-a704-37cd9dd3be45","Type":"ContainerStarted","Data":"45a711551c0cd9fd076090450b4034a62d20c2dad82218d30dd37029e7b96f60"} Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.227026 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jqg9c"] Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.228037 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.228484 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" event={"ID":"57ed857f-e806-40ad-bd78-4aecbfc24699","Type":"ContainerStarted","Data":"c4179adf33eac3b58eb20d18662067c80e1aa8b3f2359d70f063e832474bfb0d"} Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.229627 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.240445 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqg9c"] Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.263633 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:21 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:21 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:21 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.263797 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.320573 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.327844 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-q46jb" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.351506 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n989l\" (UniqueName: \"kubernetes.io/projected/65767399-1491-44ab-8df8-ce71adea95c3-kube-api-access-n989l\") pod \"redhat-marketplace-jqg9c\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.351574 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-utilities\") pod \"redhat-marketplace-jqg9c\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.351630 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-catalog-content\") pod \"redhat-marketplace-jqg9c\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.452981 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n989l\" (UniqueName: \"kubernetes.io/projected/65767399-1491-44ab-8df8-ce71adea95c3-kube-api-access-n989l\") pod \"redhat-marketplace-jqg9c\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.453063 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-utilities\") pod \"redhat-marketplace-jqg9c\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.453165 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-catalog-content\") pod \"redhat-marketplace-jqg9c\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.454533 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-utilities\") pod \"redhat-marketplace-jqg9c\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.455168 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-catalog-content\") pod \"redhat-marketplace-jqg9c\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.463754 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.491379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n989l\" (UniqueName: \"kubernetes.io/projected/65767399-1491-44ab-8df8-ce71adea95c3-kube-api-access-n989l\") pod \"redhat-marketplace-jqg9c\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.497288 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.551615 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.554151 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8cqh\" (UniqueName: \"kubernetes.io/projected/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-kube-api-access-c8cqh\") pod \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.554269 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-secret-volume\") pod \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.554338 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-config-volume\") pod \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\" (UID: \"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086\") " Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.555993 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-config-volume" (OuterVolumeSpecName: "config-volume") pod "07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086" (UID: "07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.558358 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-kube-api-access-c8cqh" (OuterVolumeSpecName: "kube-api-access-c8cqh") pod "07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086" (UID: "07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086"). InnerVolumeSpecName "kube-api-access-c8cqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.562048 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086" (UID: "07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.583633 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 14:25:21 crc kubenswrapper[4770]: E1209 14:25:21.583973 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086" containerName="collect-profiles" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.583994 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086" containerName="collect-profiles" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.584098 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086" containerName="collect-profiles" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.584586 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.589974 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.590048 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.609460 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.638646 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cqdll"] Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.640027 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.648749 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqdll"] Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.656327 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/588fa8f5-af70-4e0e-aba7-9a66953decd0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"588fa8f5-af70-4e0e-aba7-9a66953decd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.656449 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/588fa8f5-af70-4e0e-aba7-9a66953decd0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"588fa8f5-af70-4e0e-aba7-9a66953decd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.659751 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8cqh\" (UniqueName: \"kubernetes.io/projected/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-kube-api-access-c8cqh\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.659793 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.659810 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.762002 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/588fa8f5-af70-4e0e-aba7-9a66953decd0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"588fa8f5-af70-4e0e-aba7-9a66953decd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.762081 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-utilities\") pod \"redhat-marketplace-cqdll\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.762162 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/588fa8f5-af70-4e0e-aba7-9a66953decd0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"588fa8f5-af70-4e0e-aba7-9a66953decd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.762189 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5phs\" (UniqueName: \"kubernetes.io/projected/6a25b549-8e55-47e1-ba51-781119aefc25-kube-api-access-f5phs\") pod \"redhat-marketplace-cqdll\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc 
kubenswrapper[4770]: I1209 14:25:21.762220 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-catalog-content\") pod \"redhat-marketplace-cqdll\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.763149 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/588fa8f5-af70-4e0e-aba7-9a66953decd0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"588fa8f5-af70-4e0e-aba7-9a66953decd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.786314 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/588fa8f5-af70-4e0e-aba7-9a66953decd0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"588fa8f5-af70-4e0e-aba7-9a66953decd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.863416 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5phs\" (UniqueName: \"kubernetes.io/projected/6a25b549-8e55-47e1-ba51-781119aefc25-kube-api-access-f5phs\") pod \"redhat-marketplace-cqdll\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.863488 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-catalog-content\") pod \"redhat-marketplace-cqdll\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.863546 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-utilities\") pod \"redhat-marketplace-cqdll\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.864180 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-utilities\") pod \"redhat-marketplace-cqdll\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.864801 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-catalog-content\") pod \"redhat-marketplace-cqdll\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.889057 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqg9c"] Dec 09 14:25:21 crc kubenswrapper[4770]: W1209 14:25:21.894401 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65767399_1491_44ab_8df8_ce71adea95c3.slice/crio-8affefca770dea0fa5c24ddd7801b9b6c94fe29538b20a79ebf538bdd3cd35c7 WatchSource:0}: Error finding container 8affefca770dea0fa5c24ddd7801b9b6c94fe29538b20a79ebf538bdd3cd35c7: Status 404 returned error can't find the container with id 8affefca770dea0fa5c24ddd7801b9b6c94fe29538b20a79ebf538bdd3cd35c7 Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.899890 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5phs\" (UniqueName: \"kubernetes.io/projected/6a25b549-8e55-47e1-ba51-781119aefc25-kube-api-access-f5phs\") pod \"redhat-marketplace-cqdll\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.930415 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:21 crc kubenswrapper[4770]: I1209 14:25:21.969173 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:25:22 crc kubenswrapper[4770]: I1209 14:25:22.235910 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" event={"ID":"07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086","Type":"ContainerDied","Data":"fb865cf88d7e4281099d4db56dcd65f4263ee929d3866a62b1e8613bd6cc1bef"} Dec 09 14:25:22 crc kubenswrapper[4770]: I1209 14:25:22.235971 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb865cf88d7e4281099d4db56dcd65f4263ee929d3866a62b1e8613bd6cc1bef" Dec 09 14:25:22 crc kubenswrapper[4770]: I1209 14:25:22.236046 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7" Dec 09 14:25:22 crc kubenswrapper[4770]: I1209 14:25:22.237305 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqg9c" event={"ID":"65767399-1491-44ab-8df8-ce71adea95c3","Type":"ContainerStarted","Data":"8affefca770dea0fa5c24ddd7801b9b6c94fe29538b20a79ebf538bdd3cd35c7"} Dec 09 14:25:22 crc kubenswrapper[4770]: I1209 14:25:22.255711 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:22 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:22 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:22 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:22 crc kubenswrapper[4770]: I1209 14:25:22.255874 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.257619 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:23 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:23 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:23 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.258286 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.261331 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.263321 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqdll"] Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.271712 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nhncv" event={"ID":"58ab865b-2f32-439d-8e32-db4f8b4a6e2b","Type":"ContainerStarted","Data":"2ad23a44bded63e3ac6ec8b90dc363b6e694dc42f7e875a53aa9063ff0a37ce6"} Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.279485 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfhcl" event={"ID":"d6f083cd-d57a-4162-a704-37cd9dd3be45","Type":"ContainerStarted","Data":"f39ba7ab6e253302df4446391bd8ee8d632aa4ee7a943fe225c83cb19d20ed5c"} Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.286429 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" event={"ID":"57ed857f-e806-40ad-bd78-4aecbfc24699","Type":"ContainerStarted","Data":"356ba5c73e740389aa3f9b907d7384040deee6880f93f3a2daaa414523f3dae9"} Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.286572 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.292649 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v5jt" event={"ID":"869de9e6-0d73-42f6-bf6a-49cc26a84531","Type":"ContainerStarted","Data":"69ccdcb76bed81b841654566c692e460563751544c96fe51dd26d406730614b6"} Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.299622 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.302979 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqg9c" event={"ID":"65767399-1491-44ab-8df8-ce71adea95c3","Type":"ContainerStarted","Data":"cda69458c4e03cdc464d21e3b4210c10a756556fbc4d341b17b223c5e86c7403"} Dec 09 14:25:23 crc kubenswrapper[4770]: W1209 14:25:23.317109 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a25b549_8e55_47e1_ba51_781119aefc25.slice/crio-aea267640d9f3b70438e5404f36336b5b4306f3d27b629dcb54cc26f582e6f74 WatchSource:0}: Error finding container aea267640d9f3b70438e5404f36336b5b4306f3d27b629dcb54cc26f582e6f74: Status 404 returned error can't find the container with id aea267640d9f3b70438e5404f36336b5b4306f3d27b629dcb54cc26f582e6f74 Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.328103 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lkmb" event={"ID":"008593f8-0fc2-4138-abc9-ca200aef7426","Type":"ContainerStarted","Data":"3b2292cf9cee3d5e69ea77ac870ad27085c7f3f6b8a4fb9c98e555fd5adb7add"} Dec 09 14:25:23 crc kubenswrapper[4770]: W1209 14:25:23.331557 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod588fa8f5_af70_4e0e_aba7_9a66953decd0.slice/crio-13bacf698c53c20469a2ad6f6d693ceb3ee10f3cf37504a712c6d8412c01db4a WatchSource:0}: Error finding container 13bacf698c53c20469a2ad6f6d693ceb3ee10f3cf37504a712c6d8412c01db4a: Status 404 returned error can't find the container with id 13bacf698c53c20469a2ad6f6d693ceb3ee10f3cf37504a712c6d8412c01db4a Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.385074 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-97p4f"] Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.386152 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.392068 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.445559 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97p4f"] Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.522070 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ks2lq"] Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.524993 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.595255 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jbf\" (UniqueName: \"kubernetes.io/projected/b4d654c7-6c1a-49dc-86b6-d756afafe480-kube-api-access-v8jbf\") pod \"redhat-operators-97p4f\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.595351 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-utilities\") pod \"redhat-operators-ks2lq\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.595414 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-catalog-content\") pod \"redhat-operators-97p4f\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.595473 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gwjl\" (UniqueName: \"kubernetes.io/projected/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-kube-api-access-6gwjl\") pod \"redhat-operators-ks2lq\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.595510 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-utilities\") pod \"redhat-operators-97p4f\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.595540 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-catalog-content\") pod \"redhat-operators-ks2lq\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.601988 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ks2lq"] Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.709177 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gwjl\" (UniqueName: \"kubernetes.io/projected/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-kube-api-access-6gwjl\") pod \"redhat-operators-ks2lq\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.709265 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-utilities\") pod \"redhat-operators-97p4f\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.709312 
4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-catalog-content\") pod \"redhat-operators-ks2lq\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.709345 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jbf\" (UniqueName: \"kubernetes.io/projected/b4d654c7-6c1a-49dc-86b6-d756afafe480-kube-api-access-v8jbf\") pod \"redhat-operators-97p4f\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.709381 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-utilities\") pod \"redhat-operators-ks2lq\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.709418 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-catalog-content\") pod \"redhat-operators-97p4f\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.711326 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-catalog-content\") pod \"redhat-operators-ks2lq\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.724058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-utilities\") pod \"redhat-operators-ks2lq\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.730983 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-catalog-content\") pod \"redhat-operators-97p4f\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.735149 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-utilities\") pod \"redhat-operators-97p4f\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.756773 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" podStartSLOduration=134.756757592 podStartE2EDuration="2m14.756757592s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:23.75630419 +0000 UTC m=+155.652506326" watchObservedRunningTime="2025-12-09 
14:25:23.756757592 +0000 UTC m=+155.652959728" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.780379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jbf\" (UniqueName: \"kubernetes.io/projected/b4d654c7-6c1a-49dc-86b6-d756afafe480-kube-api-access-v8jbf\") pod \"redhat-operators-97p4f\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:23 crc kubenswrapper[4770]: I1209 14:25:23.786372 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gwjl\" (UniqueName: \"kubernetes.io/projected/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-kube-api-access-6gwjl\") pod \"redhat-operators-ks2lq\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.052146 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.072854 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.271026 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:24 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:24 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:24 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.271094 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.283332 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.284200 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.288130 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.288747 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.300660 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.389303 4770 generic.go:334] "Generic (PLEG): container finished" podID="65767399-1491-44ab-8df8-ce71adea95c3" containerID="cda69458c4e03cdc464d21e3b4210c10a756556fbc4d341b17b223c5e86c7403" exitCode=0 Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.389418 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqg9c" event={"ID":"65767399-1491-44ab-8df8-ce71adea95c3","Type":"ContainerDied","Data":"cda69458c4e03cdc464d21e3b4210c10a756556fbc4d341b17b223c5e86c7403"} Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.391153 4770 generic.go:334] "Generic (PLEG): container finished" podID="008593f8-0fc2-4138-abc9-ca200aef7426" containerID="3b2292cf9cee3d5e69ea77ac870ad27085c7f3f6b8a4fb9c98e555fd5adb7add" exitCode=0 Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.391246 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lkmb" event={"ID":"008593f8-0fc2-4138-abc9-ca200aef7426","Type":"ContainerDied","Data":"3b2292cf9cee3d5e69ea77ac870ad27085c7f3f6b8a4fb9c98e555fd5adb7add"} Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.396288 4770 generic.go:334] "Generic (PLEG): container finished" podID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerID="2ad23a44bded63e3ac6ec8b90dc363b6e694dc42f7e875a53aa9063ff0a37ce6" exitCode=0 Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.396359 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nhncv" event={"ID":"58ab865b-2f32-439d-8e32-db4f8b4a6e2b","Type":"ContainerDied","Data":"2ad23a44bded63e3ac6ec8b90dc363b6e694dc42f7e875a53aa9063ff0a37ce6"} Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.398755 4770 generic.go:334] "Generic (PLEG): container finished" podID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerID="f39ba7ab6e253302df4446391bd8ee8d632aa4ee7a943fe225c83cb19d20ed5c" exitCode=0 Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.398850 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfhcl" event={"ID":"d6f083cd-d57a-4162-a704-37cd9dd3be45","Type":"ContainerDied","Data":"f39ba7ab6e253302df4446391bd8ee8d632aa4ee7a943fe225c83cb19d20ed5c"} Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.421592 4770 generic.go:334] "Generic (PLEG): container finished" podID="6a25b549-8e55-47e1-ba51-781119aefc25" containerID="046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466" exitCode=0 Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.421766 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqdll" event={"ID":"6a25b549-8e55-47e1-ba51-781119aefc25","Type":"ContainerDied","Data":"046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466"} Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 
14:25:24.421809 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqdll" event={"ID":"6a25b549-8e55-47e1-ba51-781119aefc25","Type":"ContainerStarted","Data":"aea267640d9f3b70438e5404f36336b5b4306f3d27b629dcb54cc26f582e6f74"} Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.428604 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa320579-0034-46ff-ad3c-bd40e67937c7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa320579-0034-46ff-ad3c-bd40e67937c7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.428652 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa320579-0034-46ff-ad3c-bd40e67937c7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa320579-0034-46ff-ad3c-bd40e67937c7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.436468 4770 generic.go:334] "Generic (PLEG): container finished" podID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerID="69ccdcb76bed81b841654566c692e460563751544c96fe51dd26d406730614b6" exitCode=0 Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.436561 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v5jt" event={"ID":"869de9e6-0d73-42f6-bf6a-49cc26a84531","Type":"ContainerDied","Data":"69ccdcb76bed81b841654566c692e460563751544c96fe51dd26d406730614b6"} Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.439434 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"588fa8f5-af70-4e0e-aba7-9a66953decd0","Type":"ContainerStarted","Data":"13bacf698c53c20469a2ad6f6d693ceb3ee10f3cf37504a712c6d8412c01db4a"} Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.476299 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ks2lq"] Dec 09 14:25:24 crc kubenswrapper[4770]: W1209 14:25:24.491868 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e24b2d_ecd7_4d6b_9dac_cf6ddcdf0824.slice/crio-128bc728fd44eeb4284852ee1baae2e45ed552aa3ec1dc6c02440032d3f360cf WatchSource:0}: Error finding container 128bc728fd44eeb4284852ee1baae2e45ed552aa3ec1dc6c02440032d3f360cf: Status 404 returned error can't find the container with id 128bc728fd44eeb4284852ee1baae2e45ed552aa3ec1dc6c02440032d3f360cf Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.535924 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa320579-0034-46ff-ad3c-bd40e67937c7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa320579-0034-46ff-ad3c-bd40e67937c7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.535999 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa320579-0034-46ff-ad3c-bd40e67937c7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa320579-0034-46ff-ad3c-bd40e67937c7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.538669 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa320579-0034-46ff-ad3c-bd40e67937c7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa320579-0034-46ff-ad3c-bd40e67937c7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.566901 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa320579-0034-46ff-ad3c-bd40e67937c7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa320579-0034-46ff-ad3c-bd40e67937c7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.618152 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:24 crc kubenswrapper[4770]: W1209 14:25:24.779272 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4d654c7_6c1a_49dc_86b6_d756afafe480.slice/crio-13f73e016bb5f2e79f759922ba1f3c3a99400119916a4c08bc3581228bbb85b6 WatchSource:0}: Error finding container 13f73e016bb5f2e79f759922ba1f3c3a99400119916a4c08bc3581228bbb85b6: Status 404 returned error can't find the container with id 13f73e016bb5f2e79f759922ba1f3c3a99400119916a4c08bc3581228bbb85b6 Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.780205 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97p4f"] Dec 09 14:25:24 crc kubenswrapper[4770]: I1209 14:25:24.974018 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.030413 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9rzmw" Dec 09 14:25:25 crc kubenswrapper[4770]: W1209 14:25:25.168425 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfa320579_0034_46ff_ad3c_bd40e67937c7.slice/crio-d81ce611beb33773c977fb3dfd39ae01e5184839e09b5830b13690b3a61d52c8 WatchSource:0}: Error finding container d81ce611beb33773c977fb3dfd39ae01e5184839e09b5830b13690b3a61d52c8: Status 404 returned error can't find the container with id d81ce611beb33773c977fb3dfd39ae01e5184839e09b5830b13690b3a61d52c8 Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.271455 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:25 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:25 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:25 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.271558 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.505567 4770 generic.go:334] "Generic (PLEG): container finished" podID="588fa8f5-af70-4e0e-aba7-9a66953decd0" containerID="7a0cfdb2d20315f02edf5e5142c34689e0d6bfb9340c269daaddb6ea9b8869b5" exitCode=0 Dec 09 14:25:25 crc 
Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.505869 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"588fa8f5-af70-4e0e-aba7-9a66953decd0","Type":"ContainerDied","Data":"7a0cfdb2d20315f02edf5e5142c34689e0d6bfb9340c269daaddb6ea9b8869b5"}
Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.508886 4770 generic.go:334] "Generic (PLEG): container finished" podID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerID="04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd" exitCode=0
Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.508979 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97p4f" event={"ID":"b4d654c7-6c1a-49dc-86b6-d756afafe480","Type":"ContainerDied","Data":"04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd"}
Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.509028 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97p4f" event={"ID":"b4d654c7-6c1a-49dc-86b6-d756afafe480","Type":"ContainerStarted","Data":"13f73e016bb5f2e79f759922ba1f3c3a99400119916a4c08bc3581228bbb85b6"}
Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.518036 4770 generic.go:334] "Generic (PLEG): container finished" podID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerID="fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7" exitCode=0
Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.518117 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks2lq" event={"ID":"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824","Type":"ContainerDied","Data":"fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7"}
Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.518155 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks2lq" event={"ID":"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824","Type":"ContainerStarted","Data":"128bc728fd44eeb4284852ee1baae2e45ed552aa3ec1dc6c02440032d3f360cf"}
Dec 09 14:25:25 crc kubenswrapper[4770]: I1209 14:25:25.521123 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa320579-0034-46ff-ad3c-bd40e67937c7","Type":"ContainerStarted","Data":"d81ce611beb33773c977fb3dfd39ae01e5184839e09b5830b13690b3a61d52c8"}
Dec 09 14:25:26 crc kubenswrapper[4770]: I1209 14:25:26.261668 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 14:25:26 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld
Dec 09 14:25:26 crc kubenswrapper[4770]: [+]process-running ok
Dec 09 14:25:26 crc kubenswrapper[4770]: healthz check failed
Dec 09 14:25:26 crc kubenswrapper[4770]: I1209 14:25:26.262351 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 14:25:26 crc kubenswrapper[4770]: I1209 14:25:26.415334 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
connection refused" start-of-body= Dec 09 14:25:26 crc kubenswrapper[4770]: I1209 14:25:26.415418 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:26 crc kubenswrapper[4770]: I1209 14:25:26.415559 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:26 crc kubenswrapper[4770]: I1209 14:25:26.415656 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:26 crc kubenswrapper[4770]: I1209 14:25:26.547210 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa320579-0034-46ff-ad3c-bd40e67937c7","Type":"ContainerStarted","Data":"f5632cdb0669397df5ecf4d2069da1996ea0e9e894f2322e54b295790f8ec5cd"} Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.157780 4770 patch_prober.go:28] interesting pod/console-f9d7485db-x2gs6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.158348 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x2gs6" podUID="c1822a0a-3dcd-455f-a11c-15c6171f2068" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.193047 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.215987 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.215972699 podStartE2EDuration="3.215972699s" podCreationTimestamp="2025-12-09 14:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:25:26.575771046 +0000 UTC m=+158.471973182" watchObservedRunningTime="2025-12-09 14:25:27.215972699 +0000 UTC m=+159.112174835" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.261071 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:27 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:27 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:27 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.261141 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.306288 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k96sv" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.329618 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/588fa8f5-af70-4e0e-aba7-9a66953decd0-kubelet-dir\") pod \"588fa8f5-af70-4e0e-aba7-9a66953decd0\" (UID: \"588fa8f5-af70-4e0e-aba7-9a66953decd0\") " Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.329841 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/588fa8f5-af70-4e0e-aba7-9a66953decd0-kube-api-access\") pod \"588fa8f5-af70-4e0e-aba7-9a66953decd0\" (UID: \"588fa8f5-af70-4e0e-aba7-9a66953decd0\") " Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.333894 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/588fa8f5-af70-4e0e-aba7-9a66953decd0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "588fa8f5-af70-4e0e-aba7-9a66953decd0" (UID: "588fa8f5-af70-4e0e-aba7-9a66953decd0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.347641 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588fa8f5-af70-4e0e-aba7-9a66953decd0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "588fa8f5-af70-4e0e-aba7-9a66953decd0" (UID: "588fa8f5-af70-4e0e-aba7-9a66953decd0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.431014 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/588fa8f5-af70-4e0e-aba7-9a66953decd0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.431050 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/588fa8f5-af70-4e0e-aba7-9a66953decd0-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.520647 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.625364 4770 generic.go:334] "Generic (PLEG): container finished" podID="fa320579-0034-46ff-ad3c-bd40e67937c7" containerID="f5632cdb0669397df5ecf4d2069da1996ea0e9e894f2322e54b295790f8ec5cd" exitCode=0 Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.625441 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa320579-0034-46ff-ad3c-bd40e67937c7","Type":"ContainerDied","Data":"f5632cdb0669397df5ecf4d2069da1996ea0e9e894f2322e54b295790f8ec5cd"} Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.632189 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"588fa8f5-af70-4e0e-aba7-9a66953decd0","Type":"ContainerDied","Data":"13bacf698c53c20469a2ad6f6d693ceb3ee10f3cf37504a712c6d8412c01db4a"} Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.632235 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13bacf698c53c20469a2ad6f6d693ceb3ee10f3cf37504a712c6d8412c01db4a" Dec 09 14:25:27 crc kubenswrapper[4770]: I1209 14:25:27.632233 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 14:25:28 crc kubenswrapper[4770]: I1209 14:25:28.259574 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:28 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Dec 09 14:25:28 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:28 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:28 crc kubenswrapper[4770]: I1209 14:25:28.259637 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.257059 4770 patch_prober.go:28] interesting pod/router-default-5444994796-9n5nz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:25:29 crc kubenswrapper[4770]: [+]has-synced ok Dec 09 14:25:29 crc kubenswrapper[4770]: [+]process-running ok Dec 09 14:25:29 crc kubenswrapper[4770]: healthz check failed Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.259172 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9n5nz" podUID="08a04cc9-6982-4f5d-84c9-7a9c875d5a1b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.421110 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.495712 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa320579-0034-46ff-ad3c-bd40e67937c7-kube-api-access\") pod \"fa320579-0034-46ff-ad3c-bd40e67937c7\" (UID: \"fa320579-0034-46ff-ad3c-bd40e67937c7\") " Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.495802 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa320579-0034-46ff-ad3c-bd40e67937c7-kubelet-dir\") pod \"fa320579-0034-46ff-ad3c-bd40e67937c7\" (UID: \"fa320579-0034-46ff-ad3c-bd40e67937c7\") " Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.496032 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa320579-0034-46ff-ad3c-bd40e67937c7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa320579-0034-46ff-ad3c-bd40e67937c7" (UID: "fa320579-0034-46ff-ad3c-bd40e67937c7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.496461 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa320579-0034-46ff-ad3c-bd40e67937c7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.506575 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa320579-0034-46ff-ad3c-bd40e67937c7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa320579-0034-46ff-ad3c-bd40e67937c7" (UID: "fa320579-0034-46ff-ad3c-bd40e67937c7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.607713 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa320579-0034-46ff-ad3c-bd40e67937c7-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.714930 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.714709 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa320579-0034-46ff-ad3c-bd40e67937c7","Type":"ContainerDied","Data":"d81ce611beb33773c977fb3dfd39ae01e5184839e09b5830b13690b3a61d52c8"} Dec 09 14:25:29 crc kubenswrapper[4770]: I1209 14:25:29.715035 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d81ce611beb33773c977fb3dfd39ae01e5184839e09b5830b13690b3a61d52c8" Dec 09 14:25:30 crc kubenswrapper[4770]: I1209 14:25:30.262246 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:30 crc kubenswrapper[4770]: I1209 14:25:30.265866 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9n5nz" Dec 09 14:25:31 crc kubenswrapper[4770]: I1209 14:25:31.864401 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:25:31 crc kubenswrapper[4770]: I1209 14:25:31.869561 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98b4e85f-5bbb-40a6-a03a-c775e971ed85-metrics-certs\") pod \"network-metrics-daemon-b7jh8\" (UID: \"98b4e85f-5bbb-40a6-a03a-c775e971ed85\") " pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:25:31 crc kubenswrapper[4770]: I1209 14:25:31.970046 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b7jh8" Dec 09 14:25:32 crc kubenswrapper[4770]: I1209 14:25:32.669050 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-b7jh8"] Dec 09 14:25:32 crc kubenswrapper[4770]: W1209 14:25:32.717947 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98b4e85f_5bbb_40a6_a03a_c775e971ed85.slice/crio-8ffcfc67fd37514b87dd8941b8a6cb2b8606b76e13238605897085b57be37c8a WatchSource:0}: Error finding container 8ffcfc67fd37514b87dd8941b8a6cb2b8606b76e13238605897085b57be37c8a: Status 404 returned error can't find the container with id 8ffcfc67fd37514b87dd8941b8a6cb2b8606b76e13238605897085b57be37c8a Dec 09 14:25:32 crc kubenswrapper[4770]: I1209 14:25:32.925841 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" event={"ID":"98b4e85f-5bbb-40a6-a03a-c775e971ed85","Type":"ContainerStarted","Data":"8ffcfc67fd37514b87dd8941b8a6cb2b8606b76e13238605897085b57be37c8a"} Dec 09 14:25:35 crc kubenswrapper[4770]: I1209 14:25:35.943998 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" event={"ID":"98b4e85f-5bbb-40a6-a03a-c775e971ed85","Type":"ContainerStarted","Data":"846054e12323b32c65153d1473f9d6f4f939811ae04df518f7c25f243c203a5e"} Dec 09 14:25:36 crc kubenswrapper[4770]: I1209 14:25:36.416227 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:36 crc kubenswrapper[4770]: I1209 14:25:36.416356 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:36 crc kubenswrapper[4770]: I1209 14:25:36.419017 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:36 crc kubenswrapper[4770]: I1209 14:25:36.420260 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:36 crc kubenswrapper[4770]: I1209 14:25:36.420399 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-q979p" Dec 09 14:25:36 crc kubenswrapper[4770]: I1209 14:25:36.421752 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:36 crc kubenswrapper[4770]: I1209 14:25:36.421863 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" 
podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:36 crc kubenswrapper[4770]: I1209 14:25:36.421835 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9ee5fb35ddb4799007ec512df3327b5f79fed6b834ad7514891c3d9d37e54883"} pod="openshift-console/downloads-7954f5f757-q979p" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 09 14:25:36 crc kubenswrapper[4770]: I1209 14:25:36.421988 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" containerID="cri-o://9ee5fb35ddb4799007ec512df3327b5f79fed6b834ad7514891c3d9d37e54883" gracePeriod=2 Dec 09 14:25:37 crc kubenswrapper[4770]: I1209 14:25:37.199841 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:37 crc kubenswrapper[4770]: I1209 14:25:37.208332 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:25:37 crc kubenswrapper[4770]: I1209 14:25:37.975789 4770 generic.go:334] "Generic (PLEG): container finished" podID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerID="9ee5fb35ddb4799007ec512df3327b5f79fed6b834ad7514891c3d9d37e54883" exitCode=0 Dec 09 14:25:37 crc kubenswrapper[4770]: I1209 14:25:37.976337 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q979p" event={"ID":"5a7d357a-d7de-4c54-b2a2-caa7fb5f7904","Type":"ContainerDied","Data":"9ee5fb35ddb4799007ec512df3327b5f79fed6b834ad7514891c3d9d37e54883"} Dec 09 14:25:40 crc kubenswrapper[4770]: I1209 14:25:40.024702 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:25:44 crc kubenswrapper[4770]: I1209 14:25:44.243902 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:25:44 crc kubenswrapper[4770]: I1209 14:25:44.244288 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:25:46 crc kubenswrapper[4770]: I1209 14:25:46.417257 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:46 crc kubenswrapper[4770]: I1209 14:25:46.418071 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection 
refused" Dec 09 14:25:46 crc kubenswrapper[4770]: I1209 14:25:46.947260 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s2p7g" Dec 09 14:25:56 crc kubenswrapper[4770]: I1209 14:25:56.416137 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:25:56 crc kubenswrapper[4770]: I1209 14:25:56.417433 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.474972 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 14:25:57 crc kubenswrapper[4770]: E1209 14:25:57.475589 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588fa8f5-af70-4e0e-aba7-9a66953decd0" containerName="pruner" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.475625 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="588fa8f5-af70-4e0e-aba7-9a66953decd0" containerName="pruner" Dec 09 14:25:57 crc kubenswrapper[4770]: E1209 14:25:57.475675 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa320579-0034-46ff-ad3c-bd40e67937c7" containerName="pruner" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.475696 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa320579-0034-46ff-ad3c-bd40e67937c7" containerName="pruner" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.476005 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa320579-0034-46ff-ad3c-bd40e67937c7" containerName="pruner" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.476045 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="588fa8f5-af70-4e0e-aba7-9a66953decd0" containerName="pruner" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.477084 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.481284 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.486254 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.499321 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.678262 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1d1587c-28a1-4902-b351-83c79968fc6b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c1d1587c-28a1-4902-b351-83c79968fc6b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.678334 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d1587c-28a1-4902-b351-83c79968fc6b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c1d1587c-28a1-4902-b351-83c79968fc6b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.779535 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1d1587c-28a1-4902-b351-83c79968fc6b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c1d1587c-28a1-4902-b351-83c79968fc6b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.779578 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d1587c-28a1-4902-b351-83c79968fc6b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c1d1587c-28a1-4902-b351-83c79968fc6b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:25:57 crc kubenswrapper[4770]: I1209 14:25:57.779618 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1d1587c-28a1-4902-b351-83c79968fc6b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c1d1587c-28a1-4902-b351-83c79968fc6b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:26:00 crc kubenswrapper[4770]: E1209 14:26:00.046054 4770 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.458s" Dec 09 14:26:00 crc kubenswrapper[4770]: I1209 14:26:00.046470 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 14:26:00 crc kubenswrapper[4770]: I1209 14:26:00.055006 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d1587c-28a1-4902-b351-83c79968fc6b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c1d1587c-28a1-4902-b351-83c79968fc6b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:26:00 crc kubenswrapper[4770]: I1209 14:26:00.346638 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.664818 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.666117 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.694422 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.832422 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.832517 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-var-lock\") pod \"installer-9-crc\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.832568 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/281a8b39-472a-418d-ae38-157d4f2710e4-kube-api-access\") pod \"installer-9-crc\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.934130 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/281a8b39-472a-418d-ae38-157d4f2710e4-kube-api-access\") pod \"installer-9-crc\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.934328 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.934401 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-var-lock\") pod \"installer-9-crc\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.934527 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.934545 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-var-lock\") pod \"installer-9-crc\" (UID: 
\"281a8b39-472a-418d-ae38-157d4f2710e4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.954886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/281a8b39-472a-418d-ae38-157d4f2710e4-kube-api-access\") pod \"installer-9-crc\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:01 crc kubenswrapper[4770]: I1209 14:26:01.998072 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:26:06 crc kubenswrapper[4770]: I1209 14:26:06.436048 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:26:06 crc kubenswrapper[4770]: I1209 14:26:06.436458 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:26:14 crc kubenswrapper[4770]: I1209 14:26:14.244373 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:26:14 crc kubenswrapper[4770]: I1209 14:26:14.245332 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:26:14 crc kubenswrapper[4770]: I1209 14:26:14.245426 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:26:14 crc kubenswrapper[4770]: I1209 14:26:14.246612 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:26:14 crc kubenswrapper[4770]: I1209 14:26:14.246696 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f" gracePeriod=600 Dec 09 14:26:14 crc kubenswrapper[4770]: E1209 14:26:14.655468 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 09 14:26:14 crc kubenswrapper[4770]: E1209 14:26:14.655996 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mk9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8v5jt_openshift-marketplace(869de9e6-0d73-42f6-bf6a-49cc26a84531): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:26:14 crc kubenswrapper[4770]: E1209 14:26:14.657487 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8v5jt" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" Dec 09 14:26:14 crc kubenswrapper[4770]: E1209 14:26:14.695736 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 09 14:26:14 crc kubenswrapper[4770]: E1209 14:26:14.695924 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw9j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8lkmb_openshift-marketplace(008593f8-0fc2-4138-abc9-ca200aef7426): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 14:26:14 crc kubenswrapper[4770]: E1209 14:26:14.697189 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8lkmb" podUID="008593f8-0fc2-4138-abc9-ca200aef7426"
Dec 09 14:26:15 crc kubenswrapper[4770]: I1209 14:26:15.482693 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f" exitCode=0
Dec 09 14:26:15 crc kubenswrapper[4770]: I1209 14:26:15.483812 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f"}
Dec 09 14:26:16 crc kubenswrapper[4770]: E1209 14:26:16.208705 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8lkmb" podUID="008593f8-0fc2-4138-abc9-ca200aef7426"
Dec 09 14:26:16 crc kubenswrapper[4770]: E1209 14:26:16.208790 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8v5jt" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531"
Dec 09 14:26:16 crc kubenswrapper[4770]: E1209 14:26:16.294005 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
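
Once a pull fails with ErrImagePull, later sync attempts for the same pod fail fast with ImagePullBackOff while the kubelet waits out an exponential backoff before trying the registry again, which is why the same marketplace pods alternate between the two errors in this log. A sketch of that delay schedule, assuming the commonly cited kubelet defaults of a 10s base doubling to a 5m cap (treat the numbers as assumptions; this log does not state them):

package main

import (
    "fmt"
    "time"
)

// backoffDelays models the wait between image pull attempts:
// each failure doubles the delay, up to a cap.
func backoffDelays(base, limit time.Duration, attempts int) []time.Duration {
    var delays []time.Duration
    d := base
    for i := 0; i < attempts; i++ {
        delays = append(delays, d)
        d *= 2
        if d > limit {
            d = limit
        }
    }
    return delays
}

func main() {
    for i, d := range backoffDelays(10*time.Second, 5*time.Minute, 6) {
        fmt.Printf("retry %d after %v\n", i+1, d) // 10s, 20s, 40s, 1m20s, 2m40s, 5m
    }
}
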
config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 09 14:26:16 crc kubenswrapper[4770]: E1209 14:26:16.294898 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n989l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jqg9c_openshift-marketplace(65767399-1491-44ab-8df8-ce71adea95c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:26:16 crc kubenswrapper[4770]: E1209 14:26:16.297051 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jqg9c" podUID="65767399-1491-44ab-8df8-ce71adea95c3" Dec 09 14:26:16 crc kubenswrapper[4770]: I1209 14:26:16.415324 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:26:16 crc kubenswrapper[4770]: I1209 14:26:16.415379 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:26:18 crc kubenswrapper[4770]: E1209 14:26:18.019085 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jqg9c" podUID="65767399-1491-44ab-8df8-ce71adea95c3" Dec 09 14:26:18 crc kubenswrapper[4770]: 
E1209 14:26:18.092748 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 09 14:26:18 crc kubenswrapper[4770]: E1209 14:26:18.093008 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5phs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cqdll_openshift-marketplace(6a25b549-8e55-47e1-ba51-781119aefc25): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:26:18 crc kubenswrapper[4770]: E1209 14:26:18.094568 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cqdll" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" Dec 09 14:26:18 crc kubenswrapper[4770]: E1209 14:26:18.107620 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 14:26:18 crc kubenswrapper[4770]: E1209 14:26:18.107852 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crffh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kfhcl_openshift-marketplace(d6f083cd-d57a-4162-a704-37cd9dd3be45): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:26:18 crc kubenswrapper[4770]: E1209 14:26:18.109305 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kfhcl" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" Dec 09 14:26:18 crc kubenswrapper[4770]: E1209 14:26:18.135141 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 14:26:18 crc kubenswrapper[4770]: E1209 14:26:18.135407 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn5v4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nhncv_openshift-marketplace(58ab865b-2f32-439d-8e32-db4f8b4a6e2b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 14:26:18 crc kubenswrapper[4770]: E1209 14:26:18.136991 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nhncv" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b"
Dec 09 14:26:21 crc kubenswrapper[4770]: E1209 14:26:21.551474 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cqdll" podUID="6a25b549-8e55-47e1-ba51-781119aefc25"
Dec 09 14:26:21 crc kubenswrapper[4770]: E1209 14:26:21.552196 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nhncv" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b"
Dec 09 14:26:21 crc kubenswrapper[4770]: E1209 14:26:21.552313 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kfhcl" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45"
Dec 09 14:26:21 crc kubenswrapper[4770]: E1209 14:26:21.640466 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 09 14:26:21 crc kubenswrapper[4770]: E1209 14:26:21.640679 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8jbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-97p4f_openshift-marketplace(b4d654c7-6c1a-49dc-86b6-d756afafe480): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 14:26:21 crc kubenswrapper[4770]: E1209 14:26:21.641948 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-97p4f" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480"
Dec 09 14:26:21 crc kubenswrapper[4770]: E1209 14:26:21.712698 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 09 14:26:21 crc kubenswrapper[4770]: E1209 14:26:21.713333 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gwjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ks2lq_openshift-marketplace(67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 14:26:21 crc kubenswrapper[4770]: E1209 14:26:21.715673 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ks2lq" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824"
Dec 09 14:26:21 crc kubenswrapper[4770]: I1209 14:26:21.848383 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 09 14:26:22 crc kubenswrapper[4770]: I1209 14:26:22.077710 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 09 14:26:22 crc kubenswrapper[4770]: I1209 14:26:22.527361 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q979p" event={"ID":"5a7d357a-d7de-4c54-b2a2-caa7fb5f7904","Type":"ContainerStarted","Data":"b5fa7c1d37a0f92f6d0878e1610f2366d3379716fee02dd18e5d34ea07342ae9"}
Dec 09 14:26:22 crc kubenswrapper[4770]: I1209 14:26:22.527706 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-q979p"
Dec 09 14:26:22 crc kubenswrapper[4770]: I1209 14:26:22.528580 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 09 14:26:22 crc kubenswrapper[4770]: I1209 14:26:22.528662 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 09 14:26:22 crc kubenswrapper[4770]: I1209 14:26:22.529175 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"281a8b39-472a-418d-ae38-157d4f2710e4","Type":"ContainerStarted","Data":"a0e0d277b4ec1367f606b326d7e698bb6b7c9132a470acd934398f18d907b998"}
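The three "Unhandled Error" records above are Go serializations of the same corev1.Container spec; only the pod identity and the kube-api-access-* mount name differ between them. A minimal sketch that rebuilds that extract-content init container with k8s.io/api types, with field values copied from the dump (the enclosing program is illustrative only, not part of the kubelet):

```go
// Sketch reconstructing the init container whose spec the kubelet dumped
// above. All field values come from the logged &Container{...} text; the
// main() wrapper exists only to make the sketch runnable.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	runAsUser := int64(1000170000) // RunAsUser:*1000170000 in the dump
	runAsNonRoot := true           // RunAsNonRoot:*true
	allowPrivEsc := false          // AllowPrivilegeEscalation:*false

	c := corev1.Container{
		Name:    "extract-content",
		Image:   "registry.redhat.io/redhat/redhat-operator-index:v4.18",
		Command: []string{"/utilities/copy-content"},
		Args: []string{
			"--catalog.from=/configs",
			"--catalog.to=/extracted-catalog/catalog",
			"--cache.from=/tmp/cache",
			"--cache.to=/extracted-catalog/cache",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "utilities", MountPath: "/utilities"},
			{Name: "catalog-content", MountPath: "/extracted-catalog"},
		},
		ImagePullPolicy:          corev1.PullAlways,
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                &runAsUser,
			RunAsNonRoot:             &runAsNonRoot,
			AllowPrivilegeEscalation: &allowPrivEsc,
		},
	}
	fmt.Printf("%+v\n", c)
}
```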
Dec 09 14:26:22 crc kubenswrapper[4770]: I1209 14:26:22.544666 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"e8d1eda564365c5c920f110d0bb1f391b787b6130b8ab2f01b19986d8be82924"}
Dec 09 14:26:22 crc kubenswrapper[4770]: E1209 14:26:22.548836 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ks2lq" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824"
Dec 09 14:26:22 crc kubenswrapper[4770]: E1209 14:26:22.553363 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-97p4f" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480"
Dec 09 14:26:23 crc kubenswrapper[4770]: I1209 14:26:23.550945 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c1d1587c-28a1-4902-b351-83c79968fc6b","Type":"ContainerStarted","Data":"d3b7a5cdcc19bef64cda1d5151cf19680cffe3679a2a4b6913621be55e335428"}
Dec 09 14:26:23 crc kubenswrapper[4770]: I1209 14:26:23.551896 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 09 14:26:23 crc kubenswrapper[4770]: I1209 14:26:23.551938 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 09 14:26:24 crc kubenswrapper[4770]: I1209 14:26:24.558337 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b7jh8" event={"ID":"98b4e85f-5bbb-40a6-a03a-c775e971ed85","Type":"ContainerStarted","Data":"cd8a0dc5989438226c300dc528e0db18a2f21980b1e96c9862341915c78133f1"}
Dec 09 14:26:26 crc kubenswrapper[4770]: I1209 14:26:26.416014 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 09 14:26:26 crc kubenswrapper[4770]: I1209 14:26:26.416362 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 09 14:26:26 crc kubenswrapper[4770]: I1209 14:26:26.416064 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 09 14:26:26 crc kubenswrapper[4770]: I1209 14:26:26.416661 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 09 14:26:26 crc kubenswrapper[4770]: I1209 14:26:26.586585 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"281a8b39-472a-418d-ae38-157d4f2710e4","Type":"ContainerStarted","Data":"34708836111b23fc629c9bb1d643bfbe71067c4a39385c3676646ad1c6220818"}
Dec 09 14:26:26 crc kubenswrapper[4770]: I1209 14:26:26.598028 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c1d1587c-28a1-4902-b351-83c79968fc6b","Type":"ContainerStarted","Data":"5fd763f87567e8fa9c98100487d2a63976384d3ee25add423cfa826d0496bf5d"}
Dec 09 14:26:26 crc kubenswrapper[4770]: I1209 14:26:26.612899 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-b7jh8" podStartSLOduration=197.612874587 podStartE2EDuration="3m17.612874587s" podCreationTimestamp="2025-12-09 14:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:26:26.609415357 +0000 UTC m=+218.505617523" watchObservedRunningTime="2025-12-09 14:26:26.612874587 +0000 UTC m=+218.509076713"
Dec 09 14:26:27 crc kubenswrapper[4770]: I1209 14:26:27.623748 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=26.623708379 podStartE2EDuration="26.623708379s" podCreationTimestamp="2025-12-09 14:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:26:27.621907043 +0000 UTC m=+219.518109179" watchObservedRunningTime="2025-12-09 14:26:27.623708379 +0000 UTC m=+219.519910515"
Dec 09 14:26:28 crc kubenswrapper[4770]: I1209 14:26:28.160872 4770 patch_prober.go:28] interesting pod/console-f9d7485db-x2gs6 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 09 14:26:28 crc kubenswrapper[4770]: I1209 14:26:28.160936 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-x2gs6" podUID="c1822a0a-3dcd-455f-a11c-15c6171f2068" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 14:26:29 crc kubenswrapper[4770]: I1209 14:26:29.613521 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lkmb" event={"ID":"008593f8-0fc2-4138-abc9-ca200aef7426","Type":"ContainerStarted","Data":"70ddf3098f8f9eafac548057e485c569e3ec371c00acd92118f0597d750013a6"}
Dec 09 14:26:29 crc kubenswrapper[4770]: I1209 14:26:29.615699 4770 generic.go:334] "Generic (PLEG): container finished" podID="c1d1587c-28a1-4902-b351-83c79968fc6b" containerID="5fd763f87567e8fa9c98100487d2a63976384d3ee25add423cfa826d0496bf5d" exitCode=0
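The repeated "Probe failed" records for downloads-7954f5f757-q979p show the prober issuing plain HTTP GETs against the pod IP and getting connection refused until the download-server starts listening. A sketch of the probe shape implied by that output; path and port are read off the logged URL, while the timing values are illustrative assumptions not visible in the log:

```go
// Sketch of the HTTP readiness/liveness probe implied by the logged
// output `Get "http://10.217.0.18:8080/"`. The kubelet substitutes the
// pod IP at probe time; only path and port are part of the probe spec.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/",                   // from the logged URL
				Port: intstr.FromInt32(8080), // from the logged URL
			},
		},
		PeriodSeconds:    10, // assumption: not recoverable from the log
		FailureThreshold: 3,  // assumption: not recoverable from the log
	}
	fmt.Printf("%+v\n", probe)
}
```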
podID="c1d1587c-28a1-4902-b351-83c79968fc6b" containerID="5fd763f87567e8fa9c98100487d2a63976384d3ee25add423cfa826d0496bf5d" exitCode=0 Dec 09 14:26:29 crc kubenswrapper[4770]: I1209 14:26:29.615787 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c1d1587c-28a1-4902-b351-83c79968fc6b","Type":"ContainerDied","Data":"5fd763f87567e8fa9c98100487d2a63976384d3ee25add423cfa826d0496bf5d"} Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.464859 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.598036 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1d1587c-28a1-4902-b351-83c79968fc6b-kubelet-dir\") pod \"c1d1587c-28a1-4902-b351-83c79968fc6b\" (UID: \"c1d1587c-28a1-4902-b351-83c79968fc6b\") " Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.598119 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d1587c-28a1-4902-b351-83c79968fc6b-kube-api-access\") pod \"c1d1587c-28a1-4902-b351-83c79968fc6b\" (UID: \"c1d1587c-28a1-4902-b351-83c79968fc6b\") " Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.598158 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1d1587c-28a1-4902-b351-83c79968fc6b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c1d1587c-28a1-4902-b351-83c79968fc6b" (UID: "c1d1587c-28a1-4902-b351-83c79968fc6b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.598706 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1d1587c-28a1-4902-b351-83c79968fc6b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.607692 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d1587c-28a1-4902-b351-83c79968fc6b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c1d1587c-28a1-4902-b351-83c79968fc6b" (UID: "c1d1587c-28a1-4902-b351-83c79968fc6b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.699476 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d1587c-28a1-4902-b351-83c79968fc6b-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.861285 4770 generic.go:334] "Generic (PLEG): container finished" podID="008593f8-0fc2-4138-abc9-ca200aef7426" containerID="70ddf3098f8f9eafac548057e485c569e3ec371c00acd92118f0597d750013a6" exitCode=0 Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.861374 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lkmb" event={"ID":"008593f8-0fc2-4138-abc9-ca200aef7426","Type":"ContainerDied","Data":"70ddf3098f8f9eafac548057e485c569e3ec371c00acd92118f0597d750013a6"} Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.863219 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c1d1587c-28a1-4902-b351-83c79968fc6b","Type":"ContainerDied","Data":"d3b7a5cdcc19bef64cda1d5151cf19680cffe3679a2a4b6913621be55e335428"} Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.863240 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3b7a5cdcc19bef64cda1d5151cf19680cffe3679a2a4b6913621be55e335428" Dec 09 14:26:31 crc kubenswrapper[4770]: I1209 14:26:31.863281 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 14:26:36 crc kubenswrapper[4770]: I1209 14:26:36.415795 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:26:36 crc kubenswrapper[4770]: I1209 14:26:36.415795 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-q979p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 09 14:26:36 crc kubenswrapper[4770]: I1209 14:26:36.416775 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:26:36 crc kubenswrapper[4770]: I1209 14:26:36.416710 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q979p" podUID="5a7d357a-d7de-4c54-b2a2-caa7fb5f7904" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 09 14:26:36 crc kubenswrapper[4770]: I1209 14:26:36.709990 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22k76"] Dec 09 14:26:39 crc kubenswrapper[4770]: I1209 14:26:39.914383 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v5jt" event={"ID":"869de9e6-0d73-42f6-bf6a-49cc26a84531","Type":"ContainerStarted","Data":"d3ce2bf3cc8bf701ea249d77b862786c732369dc5d73f139f6f7f8bbf8c19b70"} Dec 09 
14:26:39 crc kubenswrapper[4770]: I1209 14:26:39.919017 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lkmb" event={"ID":"008593f8-0fc2-4138-abc9-ca200aef7426","Type":"ContainerStarted","Data":"aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1"} Dec 09 14:26:40 crc kubenswrapper[4770]: I1209 14:26:40.955175 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8lkmb" podStartSLOduration=6.668057212 podStartE2EDuration="1m21.955119625s" podCreationTimestamp="2025-12-09 14:25:19 +0000 UTC" firstStartedPulling="2025-12-09 14:25:23.335401474 +0000 UTC m=+155.231603620" lastFinishedPulling="2025-12-09 14:26:38.622463907 +0000 UTC m=+230.518666033" observedRunningTime="2025-12-09 14:26:40.954571308 +0000 UTC m=+232.850773474" watchObservedRunningTime="2025-12-09 14:26:40.955119625 +0000 UTC m=+232.851321761" Dec 09 14:26:41 crc kubenswrapper[4770]: I1209 14:26:41.934530 4770 generic.go:334] "Generic (PLEG): container finished" podID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerID="d3ce2bf3cc8bf701ea249d77b862786c732369dc5d73f139f6f7f8bbf8c19b70" exitCode=0 Dec 09 14:26:41 crc kubenswrapper[4770]: I1209 14:26:41.934629 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v5jt" event={"ID":"869de9e6-0d73-42f6-bf6a-49cc26a84531","Type":"ContainerDied","Data":"d3ce2bf3cc8bf701ea249d77b862786c732369dc5d73f139f6f7f8bbf8c19b70"} Dec 09 14:26:42 crc kubenswrapper[4770]: I1209 14:26:42.947476 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqg9c" event={"ID":"65767399-1491-44ab-8df8-ce71adea95c3","Type":"ContainerStarted","Data":"c55a7efaa67cdd00f1e3ebc0c22d3a6c9ed90387d12b5f5b76c5513c09ea6a2b"} Dec 09 14:26:42 crc kubenswrapper[4770]: I1209 14:26:42.965247 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97p4f" event={"ID":"b4d654c7-6c1a-49dc-86b6-d756afafe480","Type":"ContainerStarted","Data":"8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f"} Dec 09 14:26:42 crc kubenswrapper[4770]: I1209 14:26:42.992549 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nhncv" event={"ID":"58ab865b-2f32-439d-8e32-db4f8b4a6e2b","Type":"ContainerStarted","Data":"c504ef2861bf3cdef074f443a16692110d2d4723100141852c4c5506753b4184"} Dec 09 14:26:42 crc kubenswrapper[4770]: I1209 14:26:42.995291 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks2lq" event={"ID":"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824","Type":"ContainerStarted","Data":"b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0"} Dec 09 14:26:42 crc kubenswrapper[4770]: I1209 14:26:42.997488 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfhcl" event={"ID":"d6f083cd-d57a-4162-a704-37cd9dd3be45","Type":"ContainerStarted","Data":"cf529e3d167f79fa87c1552bf1904a9dc7f5494d4d60022fd115cb42263f9d1e"} Dec 09 14:26:43 crc kubenswrapper[4770]: I1209 14:26:43.010285 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqdll" event={"ID":"6a25b549-8e55-47e1-ba51-781119aefc25","Type":"ContainerStarted","Data":"4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8"} Dec 09 14:26:46 crc kubenswrapper[4770]: I1209 14:26:46.423174 4770 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-q979p" Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.071181 4770 generic.go:334] "Generic (PLEG): container finished" podID="65767399-1491-44ab-8df8-ce71adea95c3" containerID="c55a7efaa67cdd00f1e3ebc0c22d3a6c9ed90387d12b5f5b76c5513c09ea6a2b" exitCode=0 Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.071528 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqg9c" event={"ID":"65767399-1491-44ab-8df8-ce71adea95c3","Type":"ContainerDied","Data":"c55a7efaa67cdd00f1e3ebc0c22d3a6c9ed90387d12b5f5b76c5513c09ea6a2b"} Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.086821 4770 generic.go:334] "Generic (PLEG): container finished" podID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerID="c504ef2861bf3cdef074f443a16692110d2d4723100141852c4c5506753b4184" exitCode=0 Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.086950 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nhncv" event={"ID":"58ab865b-2f32-439d-8e32-db4f8b4a6e2b","Type":"ContainerDied","Data":"c504ef2861bf3cdef074f443a16692110d2d4723100141852c4c5506753b4184"} Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.102989 4770 generic.go:334] "Generic (PLEG): container finished" podID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerID="cf529e3d167f79fa87c1552bf1904a9dc7f5494d4d60022fd115cb42263f9d1e" exitCode=0 Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.103098 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfhcl" event={"ID":"d6f083cd-d57a-4162-a704-37cd9dd3be45","Type":"ContainerDied","Data":"cf529e3d167f79fa87c1552bf1904a9dc7f5494d4d60022fd115cb42263f9d1e"} Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.111577 4770 generic.go:334] "Generic (PLEG): container finished" podID="6a25b549-8e55-47e1-ba51-781119aefc25" containerID="4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8" exitCode=0 Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.111674 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqdll" event={"ID":"6a25b549-8e55-47e1-ba51-781119aefc25","Type":"ContainerDied","Data":"4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8"} Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.115417 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v5jt" event={"ID":"869de9e6-0d73-42f6-bf6a-49cc26a84531","Type":"ContainerStarted","Data":"a4e876ded02e4e2921f6198c0891662902a5539dba3649757e6adad24452d0ac"} Dec 09 14:26:47 crc kubenswrapper[4770]: I1209 14:26:47.153851 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8v5jt" podStartSLOduration=8.486062087 podStartE2EDuration="1m28.153830589s" podCreationTimestamp="2025-12-09 14:25:19 +0000 UTC" firstStartedPulling="2025-12-09 14:25:23.299189834 +0000 UTC m=+155.195391970" lastFinishedPulling="2025-12-09 14:26:42.966958336 +0000 UTC m=+234.863160472" observedRunningTime="2025-12-09 14:26:47.152216838 +0000 UTC m=+239.048418974" watchObservedRunningTime="2025-12-09 14:26:47.153830589 +0000 UTC m=+239.050032735" Dec 09 14:26:50 crc kubenswrapper[4770]: I1209 14:26:50.173331 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:26:50 crc kubenswrapper[4770]: I1209 14:26:50.173792 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:26:50 crc kubenswrapper[4770]: I1209 14:26:50.237072 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:26:50 crc kubenswrapper[4770]: I1209 14:26:50.237143 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:26:50 crc kubenswrapper[4770]: I1209 14:26:50.919321 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:26:50 crc kubenswrapper[4770]: I1209 14:26:50.920391 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:26:51 crc kubenswrapper[4770]: I1209 14:26:51.137769 4770 generic.go:334] "Generic (PLEG): container finished" podID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerID="8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f" exitCode=0 Dec 09 14:26:51 crc kubenswrapper[4770]: I1209 14:26:51.137867 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97p4f" event={"ID":"b4d654c7-6c1a-49dc-86b6-d756afafe480","Type":"ContainerDied","Data":"8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f"} Dec 09 14:26:51 crc kubenswrapper[4770]: I1209 14:26:51.141367 4770 generic.go:334] "Generic (PLEG): container finished" podID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerID="b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0" exitCode=0 Dec 09 14:26:51 crc kubenswrapper[4770]: I1209 14:26:51.141600 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks2lq" event={"ID":"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824","Type":"ContainerDied","Data":"b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0"} Dec 09 14:26:51 crc kubenswrapper[4770]: I1209 14:26:51.185326 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:26:51 crc kubenswrapper[4770]: I1209 14:26:51.196654 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:26:53 crc kubenswrapper[4770]: I1209 14:26:53.186128 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lkmb"] Dec 09 14:26:53 crc kubenswrapper[4770]: I1209 14:26:53.186682 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8lkmb" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="registry-server" containerID="cri-o://aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1" gracePeriod=2 Dec 09 14:26:57 crc kubenswrapper[4770]: I1209 14:26:57.186692 4770 generic.go:334] "Generic (PLEG): container finished" podID="008593f8-0fc2-4138-abc9-ca200aef7426" containerID="aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1" exitCode=0 Dec 09 14:26:57 crc kubenswrapper[4770]: I1209 14:26:57.186821 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lkmb" 
event={"ID":"008593f8-0fc2-4138-abc9-ca200aef7426","Type":"ContainerDied","Data":"aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1"} Dec 09 14:27:00 crc kubenswrapper[4770]: E1209 14:27:00.237897 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found" containerID="aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:27:00 crc kubenswrapper[4770]: E1209 14:27:00.239939 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found" containerID="aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:27:00 crc kubenswrapper[4770]: E1209 14:27:00.240636 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found" containerID="aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:27:00 crc kubenswrapper[4770]: E1209 14:27:00.240766 4770 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-8lkmb" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="registry-server" Dec 09 14:27:01 crc kubenswrapper[4770]: I1209 14:27:01.852699 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" containerName="oauth-openshift" containerID="cri-o://95713e9c091a713b8e7c617848ee374f89105d75043bf593c49b219764c22d97" gracePeriod=15 Dec 09 14:27:06 crc kubenswrapper[4770]: I1209 14:27:06.766853 4770 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-22k76 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 09 14:27:06 crc kubenswrapper[4770]: I1209 14:27:06.767362 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.352864 4770 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:27:07 crc kubenswrapper[4770]: E1209 14:27:07.353134 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d1587c-28a1-4902-b351-83c79968fc6b" containerName="pruner" Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.353154 4770 state_mem.go:107] "Deleted 
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.353257 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d1587c-28a1-4902-b351-83c79968fc6b" containerName="pruner"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.353602 4770 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.353915 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0" gracePeriod=15
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.353968 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.354008 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679" gracePeriod=15
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.354054 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d" gracePeriod=15
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.354082 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b" gracePeriod=15
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.353935 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795" gracePeriod=15
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356388 4770 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 09 14:27:07 crc kubenswrapper[4770]: E1209 14:27:07.356711 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356747 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 09 14:27:07 crc kubenswrapper[4770]: E1209 14:27:07.356760 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356766 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 09 14:27:07 crc kubenswrapper[4770]: E1209 14:27:07.356785 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356792 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 09 14:27:07 crc kubenswrapper[4770]: E1209 14:27:07.356801 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356807 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 09 14:27:07 crc kubenswrapper[4770]: E1209 14:27:07.356818 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356825 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 09 14:27:07 crc kubenswrapper[4770]: E1209 14:27:07.356836 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356842 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356941 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356954 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356964 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356977 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.356984 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 09 14:27:07 crc kubenswrapper[4770]: E1209 14:27:07.406542 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.507582 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.507651 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.507976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.508028 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.508127 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.508179 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.508207 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.508241 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.609944 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610012 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610070 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610073 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610101 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610157 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610166 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610173 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610185 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610221 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610285 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610358 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610412 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610427 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610493 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.610502 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:07 crc kubenswrapper[4770]: I1209 14:27:07.708130 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 14:27:08 crc kubenswrapper[4770]: I1209 14:27:08.590866 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:09 crc kubenswrapper[4770]: I1209 14:27:09.266771 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 09 14:27:09 crc kubenswrapper[4770]: I1209 14:27:09.267739 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d" exitCode=2
Dec 09 14:27:09 crc kubenswrapper[4770]: E1209 14:27:09.447494 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:09 crc kubenswrapper[4770]: E1209 14:27:09.447754 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:09 crc kubenswrapper[4770]: E1209 14:27:09.448042 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:09 crc kubenswrapper[4770]: E1209 14:27:09.448595 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:09 crc kubenswrapper[4770]: E1209 14:27:09.448871 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:09 crc kubenswrapper[4770]: I1209 14:27:09.448893 4770 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 09 14:27:09 crc kubenswrapper[4770]: E1209 14:27:09.449070 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="200ms"
Dec 09 14:27:09 crc kubenswrapper[4770]: E1209 14:27:09.650122 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="400ms"
Dec 09 14:27:10 crc kubenswrapper[4770]: E1209 14:27:10.051077 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms"
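The "Failed to update lease" / "Failed to ensure lease exists" entries are the kubelet's node-lease heartbeat: a PUT of the coordination.k8s.io/v1 Lease "crc" in the kube-node-lease namespace with a 10s client timeout, retried at the doubling intervals visible in the log (200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s) while the kube-apiserver it just killed is down. A client-go sketch of the same update call; the kubeconfig path is an illustrative assumption:

```go
// Sketch of the node-lease heartbeat the kubelet is retrying above.
// "connect: connection refused" here simply means the apiserver behind
// api-int.crc.testing:6443 is not accepting connections yet.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// assumption: kubeconfig location; the log does not show it
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.Timeout = 10 * time.Second // matches ?timeout=10s in the logged URL
	client := kubernetes.NewForConfigOrDie(cfg)

	leases := client.CoordinationV1().Leases("kube-node-lease")
	lease, err := leases.Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		fmt.Println("get lease:", err) // e.g. "connect: connection refused"
		return
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := leases.Update(context.TODO(), lease, metav1.UpdateOptions{}); err != nil {
		fmt.Println("update lease:", err)
	}
}
```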
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms" Dec 09 14:27:10 crc kubenswrapper[4770]: E1209 14:27:10.237264 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found" containerID="aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:27:10 crc kubenswrapper[4770]: E1209 14:27:10.237868 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found" containerID="aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:27:10 crc kubenswrapper[4770]: E1209 14:27:10.238162 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found" containerID="aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:27:10 crc kubenswrapper[4770]: E1209 14:27:10.238198 4770 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-8lkmb" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="registry-server" Dec 09 14:27:10 crc kubenswrapper[4770]: E1209 14:27:10.238782 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-8lkmb.187f9248c7da8b4e\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-8lkmb.187f9248c7da8b4e openshift-marketplace 29386 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-8lkmb,UID:008593f8-0fc2-4138-abc9-ca200aef7426,APIVersion:v1,ResourceVersion:28459,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:27:00 +0000 UTC,LastTimestamp:2025-12-09 14:27:10.238230007 +0000 UTC m=+262.134432143,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:27:10 crc kubenswrapper[4770]: I1209 14:27:10.274963 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 14:27:10 crc kubenswrapper[4770]: I1209 
14:27:10.275476 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b" exitCode=0 Dec 09 14:27:10 crc kubenswrapper[4770]: E1209 14:27:10.852043 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="1.6s" Dec 09 14:27:11 crc kubenswrapper[4770]: I1209 14:27:11.284855 4770 generic.go:334] "Generic (PLEG): container finished" podID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" containerID="95713e9c091a713b8e7c617848ee374f89105d75043bf593c49b219764c22d97" exitCode=0 Dec 09 14:27:11 crc kubenswrapper[4770]: I1209 14:27:11.284968 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" event={"ID":"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6","Type":"ContainerDied","Data":"95713e9c091a713b8e7c617848ee374f89105d75043bf593c49b219764c22d97"} Dec 09 14:27:12 crc kubenswrapper[4770]: I1209 14:27:12.294584 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 14:27:12 crc kubenswrapper[4770]: I1209 14:27:12.296095 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795" exitCode=0 Dec 09 14:27:12 crc kubenswrapper[4770]: E1209 14:27:12.453867 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="3.2s" Dec 09 14:27:13 crc kubenswrapper[4770]: I1209 14:27:13.304383 4770 generic.go:334] "Generic (PLEG): container finished" podID="281a8b39-472a-418d-ae38-157d4f2710e4" containerID="34708836111b23fc629c9bb1d643bfbe71067c4a39385c3676646ad1c6220818" exitCode=0 Dec 09 14:27:13 crc kubenswrapper[4770]: I1209 14:27:13.304456 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"281a8b39-472a-418d-ae38-157d4f2710e4","Type":"ContainerDied","Data":"34708836111b23fc629c9bb1d643bfbe71067c4a39385c3676646ad1c6220818"} Dec 09 14:27:13 crc kubenswrapper[4770]: I1209 14:27:13.305286 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:13 crc kubenswrapper[4770]: I1209 14:27:13.308616 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 14:27:13 crc kubenswrapper[4770]: I1209 14:27:13.309448 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679" exitCode=0 Dec 09 14:27:14 crc kubenswrapper[4770]: I1209 14:27:14.317548 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 14:27:14 crc kubenswrapper[4770]: I1209 14:27:14.318281 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0" exitCode=0 Dec 09 14:27:15 crc kubenswrapper[4770]: E1209 14:27:15.655413 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="6.4s" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.298713 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.299676 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.300012 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.339800 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lkmb" event={"ID":"008593f8-0fc2-4138-abc9-ca200aef7426","Type":"ContainerDied","Data":"c49ab7788cae45903084f40ad51b881a7aee359020e31ce0863abd5a7114d99d"} Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.339875 4770 scope.go:117] "RemoveContainer" containerID="aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.339878 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lkmb" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.341019 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.341407 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.372680 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-utilities\") pod \"008593f8-0fc2-4138-abc9-ca200aef7426\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.372818 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw9j9\" (UniqueName: \"kubernetes.io/projected/008593f8-0fc2-4138-abc9-ca200aef7426-kube-api-access-mw9j9\") pod \"008593f8-0fc2-4138-abc9-ca200aef7426\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.373016 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-catalog-content\") pod \"008593f8-0fc2-4138-abc9-ca200aef7426\" (UID: \"008593f8-0fc2-4138-abc9-ca200aef7426\") " Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.373803 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-utilities" (OuterVolumeSpecName: "utilities") pod "008593f8-0fc2-4138-abc9-ca200aef7426" (UID: "008593f8-0fc2-4138-abc9-ca200aef7426"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.378355 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008593f8-0fc2-4138-abc9-ca200aef7426-kube-api-access-mw9j9" (OuterVolumeSpecName: "kube-api-access-mw9j9") pod "008593f8-0fc2-4138-abc9-ca200aef7426" (UID: "008593f8-0fc2-4138-abc9-ca200aef7426"). InnerVolumeSpecName "kube-api-access-mw9j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.429803 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008593f8-0fc2-4138-abc9-ca200aef7426" (UID: "008593f8-0fc2-4138-abc9-ca200aef7426"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.475075 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.475104 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw9j9\" (UniqueName: \"kubernetes.io/projected/008593f8-0fc2-4138-abc9-ca200aef7426-kube-api-access-mw9j9\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.475136 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008593f8-0fc2-4138-abc9-ca200aef7426-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.656963 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.658112 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.767694 4770 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-22k76 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: i/o timeout" start-of-body= Dec 09 14:27:17 crc kubenswrapper[4770]: I1209 14:27:17.767807 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: i/o timeout" Dec 09 14:27:18 crc kubenswrapper[4770]: E1209 14:27:18.217550 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-8lkmb.187f9248c7da8b4e\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-8lkmb.187f9248c7da8b4e openshift-marketplace 29386 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-8lkmb,UID:008593f8-0fc2-4138-abc9-ca200aef7426,APIVersion:v1,ResourceVersion:28459,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:27:00 +0000 UTC,LastTimestamp:2025-12-09 14:27:10.238230007 +0000 UTC m=+262.134432143,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:27:18 crc kubenswrapper[4770]: I1209 14:27:18.591508 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:18 crc kubenswrapper[4770]: I1209 14:27:18.591896 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:22 crc kubenswrapper[4770]: E1209 14:27:22.056941 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="7s" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.870563 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.871848 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.872267 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.872992 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.877581 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.878178 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.878693 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.879219 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.986702 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/281a8b39-472a-418d-ae38-157d4f2710e4-kube-api-access\") pod \"281a8b39-472a-418d-ae38-157d4f2710e4\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.986861 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-ocp-branding-template\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.986956 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-serving-cert\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987013 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-provider-selection\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987052 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-policies\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987090 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-kubelet-dir\") pod \"281a8b39-472a-418d-ae38-157d4f2710e4\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " Dec 09 14:27:23 crc 
kubenswrapper[4770]: I1209 14:27:23.987125 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-error\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987166 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-router-certs\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987197 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-dir\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987231 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-cliconfig\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987268 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-trusted-ca-bundle\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987307 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-var-lock\") pod \"281a8b39-472a-418d-ae38-157d4f2710e4\" (UID: \"281a8b39-472a-418d-ae38-157d4f2710e4\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987367 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-idp-0-file-data\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987409 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbrf2\" (UniqueName: \"kubernetes.io/projected/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-kube-api-access-hbrf2\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987462 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-login\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987493 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-session\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987540 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-service-ca\") pod \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\" (UID: \"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6\") " Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.987935 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.988501 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.988552 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.988811 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-var-lock" (OuterVolumeSpecName: "var-lock") pod "281a8b39-472a-418d-ae38-157d4f2710e4" (UID: "281a8b39-472a-418d-ae38-157d4f2710e4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.989014 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.989108 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.989155 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "281a8b39-472a-418d-ae38-157d4f2710e4" (UID: "281a8b39-472a-418d-ae38-157d4f2710e4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.992581 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281a8b39-472a-418d-ae38-157d4f2710e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "281a8b39-472a-418d-ae38-157d4f2710e4" (UID: "281a8b39-472a-418d-ae38-157d4f2710e4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.993910 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.994225 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-kube-api-access-hbrf2" (OuterVolumeSpecName: "kube-api-access-hbrf2") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "kube-api-access-hbrf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.994527 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.994696 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.995071 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.995417 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.995798 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.996524 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:27:23 crc kubenswrapper[4770]: I1209 14:27:23.996624 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" (UID: "46b8b2a0-18e8-428c-a1a1-5494acf3ddb6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.089427 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.089989 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.090072 4770 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.090452 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.090534 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.090598 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.090657 4770 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.090771 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.090861 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.090944 4770 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/281a8b39-472a-418d-ae38-157d4f2710e4-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.091019 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.091088 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbrf2\" (UniqueName: \"kubernetes.io/projected/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-kube-api-access-hbrf2\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: 
I1209 14:27:24.091165 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.091224 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.091281 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.091373 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/281a8b39-472a-418d-ae38-157d4f2710e4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.091430 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.118565 4770 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.118640 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.384144 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"281a8b39-472a-418d-ae38-157d4f2710e4","Type":"ContainerDied","Data":"a0e0d277b4ec1367f606b326d7e698bb6b7c9132a470acd934398f18d907b998"} Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.384200 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e0d277b4ec1367f606b326d7e698bb6b7c9132a470acd934398f18d907b998" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.384211 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.385539 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" event={"ID":"46b8b2a0-18e8-428c-a1a1-5494acf3ddb6","Type":"ContainerDied","Data":"59b6d7e7f13f8481cd3040012f5a07d814b92424e42a891fd689911573c9c171"} Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.385600 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.386379 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.386816 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.387115 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.400530 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.400787 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.401022 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.401301 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.401548 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:24 crc kubenswrapper[4770]: I1209 14:27:24.401806 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.395402 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.396063 4770 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e" exitCode=1 Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.396131 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e"} Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.396961 4770 scope.go:117] "RemoveContainer" containerID="8f4d48abbf01ad00c88683c01d817a00267df464bed26ebe539dcc3bb513f52e" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.397214 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.397770 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.398061 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.398385 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.642627 4770 scope.go:117] "RemoveContainer" containerID="70ddf3098f8f9eafac548057e485c569e3ec371c00acd92118f0597d750013a6" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.743499 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.744367 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.745004 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.745399 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.746053 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.746266 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.746480 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.812522 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.812849 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.812883 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.812911 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.812968 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.813035 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.813854 4770 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.813913 4770 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.813923 4770 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.843797 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:27:25 crc kubenswrapper[4770]: I1209 14:27:25.983806 4770 scope.go:117] "RemoveContainer" containerID="3b2292cf9cee3d5e69ea77ac870ad27085c7f3f6b8a4fb9c98e555fd5adb7add" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.126055 4770 scope.go:117] "RemoveContainer" containerID="95713e9c091a713b8e7c617848ee374f89105d75043bf593c49b219764c22d97" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.402930 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"460d68ae8e223c7e195b1be8080788e6f7b425f45131ad431750b8d6146d1dfc"} Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.405942 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.406772 4770 scope.go:117] "RemoveContainer" containerID="249e97351971a251b67bc3dbb20133306590a492b6dc3b77e4f7e575ea8ad795" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.406833 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.422540 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.423600 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.424021 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.424360 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.424706 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.434438 4770 scope.go:117] "RemoveContainer" containerID="cc2b5bd2e81a2480a0cd4e44b4bf42b25cc0f1f41531d2cb680335da8fc9cc6b" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.453953 4770 scope.go:117] "RemoveContainer" containerID="7d8cbb49821afd7f518e3b90f93bed84cbb8a175261f396c1ad3bbe674c66679" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.469664 4770 scope.go:117] "RemoveContainer" containerID="b98757c201ff3bab98dbb796a8edf290b6f3caf4b96f95d6c4c813a7a4376b2d" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.483596 4770 scope.go:117] "RemoveContainer" containerID="6de5c3bb06258260c649c5cb355c9d07274c5c79969a81ba8e608823c8791bd0" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.500724 4770 scope.go:117] "RemoveContainer" containerID="720b73e027d81667cebff5154609d1caed7ee203d043cf23e2f379a665fb520a" Dec 09 14:27:26 crc kubenswrapper[4770]: I1209 14:27:26.601001 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.418614 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nhncv" event={"ID":"58ab865b-2f32-439d-8e32-db4f8b4a6e2b","Type":"ContainerStarted","Data":"64a10799aab463aa00c4fc0fae7416dda1e30936eb4fb1ac7ee09147a9015fa4"} Dec 09 
14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.419806 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.420085 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.420357 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97p4f" event={"ID":"b4d654c7-6c1a-49dc-86b6-d756afafe480","Type":"ContainerStarted","Data":"e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856"} Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.420569 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.420774 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.420948 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.421135 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.421284 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.421566 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 
14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.421893 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.422172 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks2lq" event={"ID":"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824","Type":"ContainerStarted","Data":"41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a"} Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.422177 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.422648 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.423041 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.423226 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.423469 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.423820 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.424210 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 
crc kubenswrapper[4770]: I1209 14:27:27.424417 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.424606 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.426040 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.426145 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3b9b50bbd2e66602fd9c786914d1376607ab84446f7636d030dfe2d189b3bf90"} Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.426566 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.426876 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.427161 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.427388 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.427662 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.427949 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" 
pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.428149 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.429359 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfhcl" event={"ID":"d6f083cd-d57a-4162-a704-37cd9dd3be45","Type":"ContainerStarted","Data":"87510a6667678ee2f5e94f9049d9e3341b066695fd62ba467a7c2059b27a67d2"} Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.429940 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.430137 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.430322 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.430487 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.430655 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.430874 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.431036 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.431183 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.431279 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqdll" event={"ID":"6a25b549-8e55-47e1-ba51-781119aefc25","Type":"ContainerStarted","Data":"fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337"} Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.431663 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.431874 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.432131 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.432461 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.432511 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2487b15a66bc07f6c5fb071f17ddc95e107953c11ecef760cc5702f730da253a"} Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.432741 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.433011 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: E1209 14:27:27.433023 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.433294 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.433575 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.433889 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.434285 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.434564 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.434857 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.435073 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.435274 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.435484 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.435718 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.436153 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.436528 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.438679 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqg9c" event={"ID":"65767399-1491-44ab-8df8-ce71adea95c3","Type":"ContainerStarted","Data":"62ce893bb29b4f0ac34e4ae97792edb449650c17097baf057c1f7c8ce38bbd26"} Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.439363 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.439739 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.440271 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.440564 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.440863 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.441120 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.441339 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.441642 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.442092 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.442328 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.979824 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.990367 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.990970 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.991198 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.991411 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.991638 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.991882 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.992124 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.992355 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.992596 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.992869 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:27 crc kubenswrapper[4770]: I1209 14:27:27.993106 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: E1209 14:27:28.219209 4770 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-8lkmb.187f9248c7da8b4e\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-8lkmb.187f9248c7da8b4e openshift-marketplace 29386 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-8lkmb,UID:008593f8-0fc2-4138-abc9-ca200aef7426,APIVersion:v1,ResourceVersion:28459,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of aa6c5cc697956d5c4e4a03a5359578b8637882214ddc5032fd8e8e2e2a6eeef1 is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:27:00 +0000 UTC,LastTimestamp:2025-12-09 14:27:10.238230007 +0000 UTC m=+262.134432143,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.446889 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:27:28 crc kubenswrapper[4770]: E1209 14:27:28.447443 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.597776 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.602958 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.603590 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.603946 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.604223 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.604595 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.605113 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.605336 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.605602 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:28 crc kubenswrapper[4770]: I1209 14:27:28.605919 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:29 crc kubenswrapper[4770]: E1209 14:27:29.059024 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="7s" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.192804 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.192859 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.237237 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.238061 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.238476 4770 status_manager.go:851] 
"Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.238944 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.239201 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.239513 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.239872 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.240111 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.240419 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.240666 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.240922 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 
14:27:30.255419 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.255480 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.334955 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.335573 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.335901 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.336285 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.336495 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.336693 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.337157 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.337781 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.338061 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.338395 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:30 crc kubenswrapper[4770]: I1209 14:27:30.338674 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.552619 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.552885 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.592679 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.593298 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.594060 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.594513 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.594819 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.595319 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection 
refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.595590 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.595951 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.596240 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.596524 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.596888 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.969901 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:27:31 crc kubenswrapper[4770]: I1209 14:27:31.969989 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.008695 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.009373 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.010034 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.010699 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" 
pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.011076 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.011391 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.011748 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.012056 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.012414 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.012697 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.013046 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.526285 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.527097 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.527625 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.527918 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.528195 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.528433 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.528695 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.528964 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.529228 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.529525 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.529810 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" 
pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.531035 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.531407 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.531708 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.532072 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.532344 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.532614 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.532913 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.533151 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.533402 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.533720 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.533984 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.588395 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.591322 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.592919 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.593313 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.593599 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.594395 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.594984 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:32 crc 
kubenswrapper[4770]: I1209 14:27:32.595260 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.595463 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.595991 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.596767 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.606716 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="981b1471-9853-4b69-9ab9-f06555203c07"
Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.606806 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="981b1471-9853-4b69-9ab9-f06555203c07"
Dec 09 14:27:32 crc kubenswrapper[4770]: E1209 14:27:32.607450 4770 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:32 crc kubenswrapper[4770]: I1209 14:27:32.608118 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
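
Note: the repeated "Failed to get status for pod" and "Failed deleting a mirror pod" entries above are one underlying failure, not many: kube-apiserver-crc is being restarted as a static pod, so every kubelet component that talks to https://api-int.crc.testing:6443 (the status manager's GET per pod, the mirror-pod client's DELETE) gets "connection refused" and simply retries on its next sync. For reference, a minimal client-go sketch of the call the status_manager entries correspond to; the kubeconfig path is an assumption, the namespace and pod name are copied from the log, and this is an illustration rather than the kubelet's actual code path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig that points at api-int.crc.testing:6443.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same request as the entries above:
        // GET /api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq
        pod, err := client.CoreV1().Pods("openshift-marketplace").
            Get(context.TODO(), "redhat-operators-ks2lq", metav1.GetOptions{})
        if err != nil {
            // While the API server is down this surfaces the same wrapped
            // "dial tcp ...:6443: connect: connection refused" error.
            fmt.Println("status fetch failed:", err)
            return
        }
        fmt.Println(pod.Name, pod.Status.Phase)
    }

The retries succeed once the API server is back; the "Deleted mirror pod because it is outdated" entry at 14:27:38 below marks the point where the DELETE finally goes through.
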
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.494403 4770 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f2aaafef868062b7f68cf54d88da7e79e5d2a4377895b10097b03a1841b6b281" exitCode=0
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.494519 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f2aaafef868062b7f68cf54d88da7e79e5d2a4377895b10097b03a1841b6b281"}
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.494569 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"db90cf4160cbcd481fc89580d0259133efd051b993dc1e5f0d52f72b404a9ece"}
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.495041 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="981b1471-9853-4b69-9ab9-f06555203c07"
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.495062 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="981b1471-9853-4b69-9ab9-f06555203c07"
Dec 09 14:27:33 crc kubenswrapper[4770]: E1209 14:27:33.495480 4770 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.495552 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.495865 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.496215 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.496544 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused"
Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.496964 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.497289 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.497553 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.497845 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.498113 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:33 crc kubenswrapper[4770]: I1209 14:27:33.498360 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.053062 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.053164 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.074022 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.074118 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.098491 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.099057 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.099333 4770 
status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.099541 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.099766 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.100020 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.100532 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.100814 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.101206 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.101468 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.101722 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc 
kubenswrapper[4770]: I1209 14:27:34.109283 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.109691 4770 status_manager.go:851] "Failed to get status for pod" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" pod="openshift-marketplace/redhat-marketplace-cqdll" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cqdll\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.110046 4770 status_manager.go:851] "Failed to get status for pod" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" pod="openshift-authentication/oauth-openshift-558db77b4-22k76" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-22k76\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.110947 4770 status_manager.go:851] "Failed to get status for pod" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" pod="openshift-marketplace/certified-operators-kfhcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kfhcl\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.111264 4770 status_manager.go:851] "Failed to get status for pod" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" pod="openshift-marketplace/certified-operators-nhncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nhncv\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.111644 4770 status_manager.go:851] "Failed to get status for pod" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" pod="openshift-marketplace/community-operators-8lkmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8lkmb\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.111908 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.112253 4770 status_manager.go:851] "Failed to get status for pod" podUID="65767399-1491-44ab-8df8-ce71adea95c3" pod="openshift-marketplace/redhat-marketplace-jqg9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jqg9c\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.112561 4770 status_manager.go:851] "Failed to get status for pod" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" pod="openshift-marketplace/redhat-operators-97p4f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97p4f\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.112841 4770 status_manager.go:851] "Failed to get status for pod" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.113155 4770 status_manager.go:851] "Failed to get status for pod" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" pod="openshift-marketplace/redhat-operators-ks2lq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ks2lq\": dial tcp 38.102.83.182:6443: connect: connection refused" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.503485 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f097315c6d27d9a87f3bd8afdb4c8083e7f9452457a745b22a9f6766d608590e"} Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.550445 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:27:34 crc kubenswrapper[4770]: I1209 14:27:34.559652 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:27:35 crc kubenswrapper[4770]: I1209 14:27:35.512052 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7525f82893d643d7acbcead415f1ce020f3edaac1ba53bba4e4197ab4add9b22"} Dec 09 14:27:36 crc kubenswrapper[4770]: I1209 14:27:36.523320 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8335726462d5b504b67ca34280650120d3adb41c8fdf59e045dfc4590fdee9e6"} Dec 09 14:27:37 crc kubenswrapper[4770]: I1209 14:27:37.531160 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8c5f4dbd9e8c5576964b36ea6f2885895a7edcfba314692f729b8f0dffb8d03d"} Dec 09 14:27:38 crc kubenswrapper[4770]: I1209 14:27:38.539178 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5db0dfe5a3c5e09c2923b7ebc829ffbb55b33693bdbbd633c0f178e4056667b"} Dec 09 14:27:38 crc kubenswrapper[4770]: I1209 14:27:38.539514 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:27:38 crc kubenswrapper[4770]: I1209 14:27:38.539412 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="981b1471-9853-4b69-9ab9-f06555203c07" Dec 09 14:27:38 crc kubenswrapper[4770]: I1209 14:27:38.539535 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="981b1471-9853-4b69-9ab9-f06555203c07" Dec 09 14:27:38 crc kubenswrapper[4770]: I1209 14:27:38.546694 4770 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:27:39 crc kubenswrapper[4770]: I1209 14:27:39.545031 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="981b1471-9853-4b69-9ab9-f06555203c07" Dec 09 
14:27:39 crc kubenswrapper[4770]: I1209 14:27:39.545065 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="981b1471-9853-4b69-9ab9-f06555203c07" Dec 09 14:27:40 crc kubenswrapper[4770]: I1209 14:27:40.235749 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:27:40 crc kubenswrapper[4770]: I1209 14:27:40.312106 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:27:41 crc kubenswrapper[4770]: I1209 14:27:41.330634 4770 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a4e0fca3-0648-41c7-834f-e6a88f7ad55d" Dec 09 14:27:41 crc kubenswrapper[4770]: I1209 14:27:41.557684 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 09 14:27:41 crc kubenswrapper[4770]: I1209 14:27:41.558248 4770 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2" exitCode=1 Dec 09 14:27:41 crc kubenswrapper[4770]: I1209 14:27:41.558294 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2"} Dec 09 14:27:41 crc kubenswrapper[4770]: I1209 14:27:41.558763 4770 scope.go:117] "RemoveContainer" containerID="35e2edf9539cb7b7691313b4089a5adcddc0c03615b88dd955e82d6e3c1114c2" Dec 09 14:27:42 crc kubenswrapper[4770]: I1209 14:27:42.566336 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 09 14:27:42 crc kubenswrapper[4770]: I1209 14:27:42.567123 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"116bb1a37a2f304ed3c49ec1438b0e54454a099cd348fa64fbd3bbc0d614fcc8"} Dec 09 14:27:44 crc kubenswrapper[4770]: I1209 14:27:44.120964 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:28:03 crc kubenswrapper[4770]: I1209 14:28:03.169032 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 14:28:08 crc kubenswrapper[4770]: I1209 14:28:08.023378 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 14:28:09 crc kubenswrapper[4770]: I1209 14:28:09.732142 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 09 14:28:10 crc kubenswrapper[4770]: I1209 14:28:10.285974 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 14:28:10 crc kubenswrapper[4770]: I1209 14:28:10.348676 4770 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 09 14:28:10 crc kubenswrapper[4770]: I1209 14:28:10.531000 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 14:28:11 crc kubenswrapper[4770]: I1209 14:28:11.480118 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 14:28:11 crc kubenswrapper[4770]: I1209 14:28:11.615555 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 14:28:11 crc kubenswrapper[4770]: I1209 14:28:11.855259 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 14:28:12 crc kubenswrapper[4770]: I1209 14:28:12.172666 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 09 14:28:12 crc kubenswrapper[4770]: I1209 14:28:12.324890 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 14:28:12 crc kubenswrapper[4770]: I1209 14:28:12.485993 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 14:28:12 crc kubenswrapper[4770]: I1209 14:28:12.665413 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 09 14:28:12 crc kubenswrapper[4770]: I1209 14:28:12.928915 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 14:28:13 crc kubenswrapper[4770]: I1209 14:28:13.136591 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 14:28:13 crc kubenswrapper[4770]: I1209 14:28:13.282536 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 14:28:13 crc kubenswrapper[4770]: I1209 14:28:13.348965 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 14:28:13 crc kubenswrapper[4770]: I1209 14:28:13.435162 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 14:28:13 crc kubenswrapper[4770]: I1209 14:28:13.451763 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 09 14:28:13 crc kubenswrapper[4770]: I1209 14:28:13.478518 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 14:28:13 crc kubenswrapper[4770]: I1209 14:28:13.548298 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 14:28:13 crc kubenswrapper[4770]: I1209 14:28:13.710139 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 14:28:13 crc kubenswrapper[4770]: I1209 14:28:13.933076 4770 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 14:28:14 crc kubenswrapper[4770]: I1209 14:28:14.126948 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 14:28:14 crc kubenswrapper[4770]: I1209 14:28:14.725532 4770 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 09 14:28:14 crc kubenswrapper[4770]: I1209 14:28:14.728270 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.023540 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.073863 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.271767 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.294106 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.378592 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.384434 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.477916 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.586902 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.609426 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 14:28:15 crc kubenswrapper[4770]: I1209 14:28:15.871574 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 14:28:16 crc kubenswrapper[4770]: I1209 14:28:16.016608 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 14:28:16 crc kubenswrapper[4770]: I1209 14:28:16.052151 4770 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 14:28:16 crc kubenswrapper[4770]: I1209 14:28:16.078849 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 14:28:16 crc kubenswrapper[4770]: I1209 14:28:16.281218 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 09 14:28:16 crc kubenswrapper[4770]: I1209 14:28:16.310152 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 14:28:16 crc kubenswrapper[4770]: I1209 14:28:16.326154 4770 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 14:28:16 crc kubenswrapper[4770]: I1209 14:28:16.588877 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 14:28:16 crc kubenswrapper[4770]: I1209 14:28:16.667987 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 09 14:28:16 crc kubenswrapper[4770]: I1209 14:28:16.712071 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 14:28:17 crc kubenswrapper[4770]: I1209 14:28:17.095051 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 14:28:17 crc kubenswrapper[4770]: I1209 14:28:17.161156 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 14:28:17 crc kubenswrapper[4770]: I1209 14:28:17.223081 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 14:28:17 crc kubenswrapper[4770]: I1209 14:28:17.627884 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 14:28:18 crc kubenswrapper[4770]: I1209 14:28:18.022705 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 09 14:28:18 crc kubenswrapper[4770]: I1209 14:28:18.137795 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 14:28:18 crc kubenswrapper[4770]: I1209 14:28:18.180471 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 14:28:18 crc kubenswrapper[4770]: I1209 14:28:18.246662 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 14:28:18 crc kubenswrapper[4770]: I1209 14:28:18.590944 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 14:28:19 crc kubenswrapper[4770]: I1209 14:28:19.335129 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 09 14:28:19 crc kubenswrapper[4770]: I1209 14:28:19.388644 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 14:28:19 crc kubenswrapper[4770]: I1209 14:28:19.483937 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 14:28:19 crc kubenswrapper[4770]: I1209 14:28:19.627149 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 14:28:19 crc kubenswrapper[4770]: I1209 14:28:19.738648 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 14:28:19 crc kubenswrapper[4770]: I1209 14:28:19.805345 4770 generic.go:334] "Generic (PLEG): container finished" podID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerID="61dd46d8dc8e4106225f5dbf2568106c36247fc2e46766866246d4f184ae031e" exitCode=0 Dec 09 14:28:19 crc kubenswrapper[4770]: I1209 14:28:19.805407 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" event={"ID":"08d594b0-871f-4f3f-9d64-f14f0773be76","Type":"ContainerDied","Data":"61dd46d8dc8e4106225f5dbf2568106c36247fc2e46766866246d4f184ae031e"} Dec 09 14:28:19 crc kubenswrapper[4770]: I1209 14:28:19.806066 4770 scope.go:117] "RemoveContainer" containerID="61dd46d8dc8e4106225f5dbf2568106c36247fc2e46766866246d4f184ae031e" Dec 09 14:28:20 crc kubenswrapper[4770]: I1209 14:28:20.060780 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 09 14:28:20 crc kubenswrapper[4770]: I1209 14:28:20.114541 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 14:28:20 crc kubenswrapper[4770]: I1209 14:28:20.300717 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 14:28:20 crc kubenswrapper[4770]: I1209 14:28:20.344803 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 09 14:28:20 crc kubenswrapper[4770]: I1209 14:28:20.363507 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 14:28:20 crc kubenswrapper[4770]: I1209 14:28:20.386204 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 14:28:20 crc kubenswrapper[4770]: I1209 14:28:20.436214 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 09 14:28:20 crc kubenswrapper[4770]: I1209 14:28:20.812970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" event={"ID":"08d594b0-871f-4f3f-9d64-f14f0773be76","Type":"ContainerStarted","Data":"7028db6b16120d1223d5ef7aa311cf29109299d7d2d87fbad146e77b7a1cc3a3"} Dec 09 14:28:20 crc kubenswrapper[4770]: I1209 14:28:20.905261 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.062520 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.224058 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.324354 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.373874 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.427716 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.504367 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.557766 4770 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.589233 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.817765 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.819982 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:28:21 crc kubenswrapper[4770]: I1209 14:28:21.847568 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 14:28:22 crc kubenswrapper[4770]: I1209 14:28:22.223312 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 14:28:22 crc kubenswrapper[4770]: I1209 14:28:22.278411 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 14:28:22 crc kubenswrapper[4770]: I1209 14:28:22.330665 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 09 14:28:22 crc kubenswrapper[4770]: I1209 14:28:22.346493 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 09 14:28:22 crc kubenswrapper[4770]: I1209 14:28:22.359502 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 14:28:22 crc kubenswrapper[4770]: I1209 14:28:22.570855 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 09 14:28:22 crc kubenswrapper[4770]: I1209 14:28:22.698287 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 14:28:22 crc kubenswrapper[4770]: I1209 14:28:22.740202 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.004886 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.041493 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.404514 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.404522 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.432591 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.571057 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.637602 4770 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.740405 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.748494 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.764239 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.777349 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 14:28:23 crc kubenswrapper[4770]: I1209 14:28:23.982798 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 09 14:28:24 crc kubenswrapper[4770]: I1209 14:28:24.202251 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 14:28:24 crc kubenswrapper[4770]: I1209 14:28:24.349324 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 14:28:24 crc kubenswrapper[4770]: I1209 14:28:24.476939 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 14:28:24 crc kubenswrapper[4770]: I1209 14:28:24.560620 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 14:28:24 crc kubenswrapper[4770]: I1209 14:28:24.925299 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 09 14:28:25 crc kubenswrapper[4770]: I1209 14:28:25.013560 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 14:28:25 crc kubenswrapper[4770]: I1209 14:28:25.312613 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 14:28:25 crc kubenswrapper[4770]: I1209 14:28:25.443757 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 14:28:25 crc kubenswrapper[4770]: I1209 14:28:25.452939 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 09 14:28:25 crc kubenswrapper[4770]: I1209 14:28:25.608349 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 14:28:25 crc kubenswrapper[4770]: I1209 14:28:25.719946 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 14:28:25 crc kubenswrapper[4770]: I1209 14:28:25.867205 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 14:28:25 crc kubenswrapper[4770]: I1209 14:28:25.905260 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 14:28:26 crc kubenswrapper[4770]: I1209 14:28:26.020845 4770 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Dec 09 14:28:26 crc kubenswrapper[4770]: I1209 14:28:26.215289 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 09 14:28:26 crc kubenswrapper[4770]: I1209 14:28:26.329699 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 14:28:26 crc kubenswrapper[4770]: I1209 14:28:26.403507 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 14:28:26 crc kubenswrapper[4770]: I1209 14:28:26.420592 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 14:28:26 crc kubenswrapper[4770]: I1209 14:28:26.639421 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 14:28:26 crc kubenswrapper[4770]: I1209 14:28:26.885342 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 09 14:28:26 crc kubenswrapper[4770]: I1209 14:28:26.930157 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 14:28:26 crc kubenswrapper[4770]: I1209 14:28:26.983332 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 14:28:27 crc kubenswrapper[4770]: I1209 14:28:27.036161 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 09 14:28:27 crc kubenswrapper[4770]: I1209 14:28:27.139719 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 14:28:27 crc kubenswrapper[4770]: I1209 14:28:27.223365 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 14:28:27 crc kubenswrapper[4770]: I1209 14:28:27.297188 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 14:28:27 crc kubenswrapper[4770]: I1209 14:28:27.346938 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 09 14:28:27 crc kubenswrapper[4770]: I1209 14:28:27.485413 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 14:28:27 crc kubenswrapper[4770]: I1209 14:28:27.711196 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 14:28:27 crc kubenswrapper[4770]: I1209 14:28:27.782867 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.004369 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.028115 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.056248 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.325687 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.400236 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.730830 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.808864 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.878102 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.887598 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 09 14:28:28 crc kubenswrapper[4770]: I1209 14:28:28.952168 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.017240 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.020763 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.028512 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.063481 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.136216 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.199206 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.231003 4770 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.231773 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nhncv" podStartSLOduration=68.524274858 podStartE2EDuration="3m10.231729964s" podCreationTimestamp="2025-12-09 14:25:19 +0000 UTC" firstStartedPulling="2025-12-09 14:25:24.400430043 +0000 UTC m=+156.296632179" lastFinishedPulling="2025-12-09 14:27:26.107885139 +0000 UTC m=+278.004087285" observedRunningTime="2025-12-09 14:27:41.062453242 +0000 UTC m=+292.958655408" watchObservedRunningTime="2025-12-09 14:28:29.231729964 +0000 UTC m=+341.127932100"
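
Note: the pod_startup_latency_tracker entries here and below carry two durations each, and the numbers in the certified-operators-nhncv entry above are mutually consistent (the formula is inferred from these values, not shown in the log itself): podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the E2E value minus the image-pull window taken from the monotonic m=+ offsets:

    podStartE2EDuration = 14:28:29.231729964 - 14:25:19.000000000 = 190.231729964s  (= 3m10.231729964s)
    pull window         = 278.004087285 - 156.296632179           = 121.707455106s
    podStartSLOduration = 190.231729964 - 121.707455106           = 68.524274858s

which matches the logged podStartSLOduration exactly: the SLO figure excludes time spent pulling images.
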
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.233064 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ks2lq" podStartSLOduration=65.777059481 podStartE2EDuration="3m6.233055261s" podCreationTimestamp="2025-12-09 14:25:23 +0000 UTC" firstStartedPulling="2025-12-09 14:25:25.527970932 +0000 UTC m=+157.424173068" lastFinishedPulling="2025-12-09 14:27:25.983966702 +0000 UTC m=+277.880168848" observedRunningTime="2025-12-09 14:27:41.174187446 +0000 UTC m=+293.070389572" watchObservedRunningTime="2025-12-09 14:28:29.233055261 +0000 UTC m=+341.129257397"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.233232 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kfhcl" podStartSLOduration=68.653960768 podStartE2EDuration="3m10.233226356s" podCreationTimestamp="2025-12-09 14:25:19 +0000 UTC" firstStartedPulling="2025-12-09 14:25:24.406289058 +0000 UTC m=+156.302491194" lastFinishedPulling="2025-12-09 14:27:25.985554646 +0000 UTC m=+277.881756782" observedRunningTime="2025-12-09 14:27:41.252053264 +0000 UTC m=+293.148255400" watchObservedRunningTime="2025-12-09 14:28:29.233226356 +0000 UTC m=+341.129428492"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.233325 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cqdll" podStartSLOduration=66.670582035 podStartE2EDuration="3m8.233320389s" podCreationTimestamp="2025-12-09 14:25:21 +0000 UTC" firstStartedPulling="2025-12-09 14:25:24.423708788 +0000 UTC m=+156.319910924" lastFinishedPulling="2025-12-09 14:27:25.986447142 +0000 UTC m=+277.882649278" observedRunningTime="2025-12-09 14:27:41.211615883 +0000 UTC m=+293.107818039" watchObservedRunningTime="2025-12-09 14:28:29.233320389 +0000 UTC m=+341.129522525"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.234400 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-97p4f" podStartSLOduration=65.977522081 podStartE2EDuration="3m6.234393689s" podCreationTimestamp="2025-12-09 14:25:23 +0000 UTC" firstStartedPulling="2025-12-09 14:25:25.515160612 +0000 UTC m=+157.411362748" lastFinishedPulling="2025-12-09 14:27:25.77203222 +0000 UTC m=+277.668234356" observedRunningTime="2025-12-09 14:27:41.154539841 +0000 UTC m=+293.050742067" watchObservedRunningTime="2025-12-09 14:28:29.234393689 +0000 UTC m=+341.130595825"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.235818 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jqg9c" podStartSLOduration=65.559349336 podStartE2EDuration="3m8.235811549s" podCreationTimestamp="2025-12-09 14:25:21 +0000 UTC" firstStartedPulling="2025-12-09 14:25:23.309075112 +0000 UTC m=+155.205277248" lastFinishedPulling="2025-12-09 14:27:25.985537325 +0000 UTC m=+277.881739461" observedRunningTime="2025-12-09 14:27:41.134802454 +0000 UTC m=+293.031004620" watchObservedRunningTime="2025-12-09 14:28:29.235811549 +0000 UTC m=+341.132013685"
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.239021 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-8lkmb","openshift-authentication/oauth-openshift-558db77b4-22k76"]
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.239138 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.247303 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.264818 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=51.264800192 podStartE2EDuration="51.264800192s" podCreationTimestamp="2025-12-09 14:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:28:29.260493121 +0000 UTC m=+341.156695257" watchObservedRunningTime="2025-12-09 14:28:29.264800192 +0000 UTC m=+341.161002328" Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.343832 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.355435 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.379864 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.418058 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.530923 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.531947 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.754039 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 09 14:28:29 crc kubenswrapper[4770]: I1209 14:28:29.934861 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.028612 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.111353 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.226487 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.432938 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.440548 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.480639 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.498839 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.596877 4770 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" path="/var/lib/kubelet/pods/008593f8-0fc2-4138-abc9-ca200aef7426/volumes" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.598364 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" path="/var/lib/kubelet/pods/46b8b2a0-18e8-428c-a1a1-5494acf3ddb6/volumes" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.685290 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.712992 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5fff9cfc9d-d525c"] Dec 09 14:28:30 crc kubenswrapper[4770]: E1209 14:28:30.713212 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="extract-utilities" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.713223 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="extract-utilities" Dec 09 14:28:30 crc kubenswrapper[4770]: E1209 14:28:30.713238 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="registry-server" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.713243 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="registry-server" Dec 09 14:28:30 crc kubenswrapper[4770]: E1209 14:28:30.713255 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" containerName="oauth-openshift" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.713261 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" containerName="oauth-openshift" Dec 09 14:28:30 crc kubenswrapper[4770]: E1209 14:28:30.713271 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="extract-content" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.713276 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="extract-content" Dec 09 14:28:30 crc kubenswrapper[4770]: E1209 14:28:30.713287 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" containerName="installer" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.713294 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" containerName="installer" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.713389 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="281a8b39-472a-418d-ae38-157d4f2710e4" containerName="installer" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.713397 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b8b2a0-18e8-428c-a1a1-5494acf3ddb6" containerName="oauth-openshift" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.713413 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="008593f8-0fc2-4138-abc9-ca200aef7426" containerName="registry-server" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.713880 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.717373 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.717774 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.717819 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.717844 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.717900 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.718158 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.718231 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.718305 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.718643 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.719153 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.719322 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.719440 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.726282 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5fff9cfc9d-d525c"] Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.731772 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.733212 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.735557 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.813990 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-session\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " 
pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.814347 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.814368 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.814427 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.814447 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.814491 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.814516 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-template-login\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.814561 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-audit-policies\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.814685 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/2074218a-8db0-4348-817b-d00026c869d2-audit-dir\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.815044 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn7zd\" (UniqueName: \"kubernetes.io/projected/2074218a-8db0-4348-817b-d00026c869d2-kube-api-access-cn7zd\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.815212 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.815475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.815541 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-template-error\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.815577 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.894431 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.905267 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916691 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916760 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916791 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916823 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-template-login\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916877 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-audit-policies\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916903 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2074218a-8db0-4348-817b-d00026c869d2-audit-dir\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916942 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn7zd\" (UniqueName: \"kubernetes.io/projected/2074218a-8db0-4348-817b-d00026c869d2-kube-api-access-cn7zd\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916967 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.916997 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.917010 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2074218a-8db0-4348-817b-d00026c869d2-audit-dir\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.917027 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-template-error\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.917051 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.917078 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-session\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.917548 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.917674 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-audit-policies\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.918116 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.918185 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.923862 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.923940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-template-login\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.924049 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.924129 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-session\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.924326 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.925313 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.927324 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-user-template-error\") pod \"oauth-openshift-5fff9cfc9d-d525c\" 
(UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.927353 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2074218a-8db0-4348-817b-d00026c869d2-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:30 crc kubenswrapper[4770]: I1209 14:28:30.939433 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn7zd\" (UniqueName: \"kubernetes.io/projected/2074218a-8db0-4348-817b-d00026c869d2-kube-api-access-cn7zd\") pod \"oauth-openshift-5fff9cfc9d-d525c\" (UID: \"2074218a-8db0-4348-817b-d00026c869d2\") " pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.012477 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.034604 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.136928 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.348888 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.389105 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.527167 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.613336 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.703258 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.727383 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.733085 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 14:28:31 crc kubenswrapper[4770]: I1209 14:28:31.780945 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.032018 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.074506 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.206363 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 09 14:28:32 crc 
kubenswrapper[4770]: I1209 14:28:32.279176 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.285349 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.534106 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.547194 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.547365 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.608690 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.608746 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.614491 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.696517 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.769591 4770 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.779068 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.793899 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.846854 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 14:28:32 crc kubenswrapper[4770]: I1209 14:28:32.889276 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:28:33 crc kubenswrapper[4770]: I1209 14:28:33.058856 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 14:28:33 crc kubenswrapper[4770]: I1209 14:28:33.072519 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 14:28:33 crc kubenswrapper[4770]: I1209 14:28:33.250438 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 14:28:33 crc kubenswrapper[4770]: I1209 14:28:33.283630 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 14:28:33 crc kubenswrapper[4770]: I1209 14:28:33.733748 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 14:28:33 crc 
kubenswrapper[4770]: I1209 14:28:33.847950 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 09 14:28:33 crc kubenswrapper[4770]: E1209 14:28:33.910902 4770 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 09 14:28:33 crc kubenswrapper[4770]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fff9cfc9d-d525c_openshift-authentication_2074218a-8db0-4348-817b-d00026c869d2_0(27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0): error adding pod openshift-authentication_oauth-openshift-5fff9cfc9d-d525c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0" Netns:"/var/run/netns/6d5cc320-63d0-4ded-b57f-0039f5cf74dc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5fff9cfc9d-d525c;K8S_POD_INFRA_CONTAINER_ID=27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0;K8S_POD_UID=2074218a-8db0-4348-817b-d00026c869d2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c] networking: Multus: [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c/2074218a-8db0-4348-817b-d00026c869d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5fff9cfc9d-d525c in out of cluster comm: pod "oauth-openshift-5fff9cfc9d-d525c" not found Dec 09 14:28:33 crc kubenswrapper[4770]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 09 14:28:33 crc kubenswrapper[4770]: > Dec 09 14:28:33 crc kubenswrapper[4770]: E1209 14:28:33.911048 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 09 14:28:33 crc kubenswrapper[4770]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fff9cfc9d-d525c_openshift-authentication_2074218a-8db0-4348-817b-d00026c869d2_0(27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0): error adding pod openshift-authentication_oauth-openshift-5fff9cfc9d-d525c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0" Netns:"/var/run/netns/6d5cc320-63d0-4ded-b57f-0039f5cf74dc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5fff9cfc9d-d525c;K8S_POD_INFRA_CONTAINER_ID=27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0;K8S_POD_UID=2074218a-8db0-4348-817b-d00026c869d2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c] networking: Multus: [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c/2074218a-8db0-4348-817b-d00026c869d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5fff9cfc9d-d525c in out of cluster comm: pod 
"oauth-openshift-5fff9cfc9d-d525c" not found Dec 09 14:28:33 crc kubenswrapper[4770]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 09 14:28:33 crc kubenswrapper[4770]: > pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:33 crc kubenswrapper[4770]: E1209 14:28:33.911087 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 09 14:28:33 crc kubenswrapper[4770]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fff9cfc9d-d525c_openshift-authentication_2074218a-8db0-4348-817b-d00026c869d2_0(27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0): error adding pod openshift-authentication_oauth-openshift-5fff9cfc9d-d525c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0" Netns:"/var/run/netns/6d5cc320-63d0-4ded-b57f-0039f5cf74dc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5fff9cfc9d-d525c;K8S_POD_INFRA_CONTAINER_ID=27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0;K8S_POD_UID=2074218a-8db0-4348-817b-d00026c869d2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c] networking: Multus: [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c/2074218a-8db0-4348-817b-d00026c869d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5fff9cfc9d-d525c in out of cluster comm: pod "oauth-openshift-5fff9cfc9d-d525c" not found Dec 09 14:28:33 crc kubenswrapper[4770]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 09 14:28:33 crc kubenswrapper[4770]: > pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:33 crc kubenswrapper[4770]: E1209 14:28:33.911191 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5fff9cfc9d-d525c_openshift-authentication(2074218a-8db0-4348-817b-d00026c869d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5fff9cfc9d-d525c_openshift-authentication(2074218a-8db0-4348-817b-d00026c869d2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fff9cfc9d-d525c_openshift-authentication_2074218a-8db0-4348-817b-d00026c869d2_0(27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0): error adding pod openshift-authentication_oauth-openshift-5fff9cfc9d-d525c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:\\\"27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0\\\" Netns:\\\"/var/run/netns/6d5cc320-63d0-4ded-b57f-0039f5cf74dc\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5fff9cfc9d-d525c;K8S_POD_INFRA_CONTAINER_ID=27b74ed35df5c634fa03cec56ecf7ad4aebd8c280cd538f2567a56c5c27addc0;K8S_POD_UID=2074218a-8db0-4348-817b-d00026c869d2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c] networking: Multus: [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c/2074218a-8db0-4348-817b-d00026c869d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5fff9cfc9d-d525c in out of cluster comm: pod \\\"oauth-openshift-5fff9cfc9d-d525c\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" podUID="2074218a-8db0-4348-817b-d00026c869d2" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.045873 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.133412 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.516903 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.530143 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.544162 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.607937 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.643880 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.802844 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.902882 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.903727 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:34 crc kubenswrapper[4770]: I1209 14:28:34.944358 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 14:28:35 crc kubenswrapper[4770]: I1209 14:28:35.070166 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 14:28:35 crc kubenswrapper[4770]: I1209 14:28:35.125161 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 14:28:35 crc kubenswrapper[4770]: I1209 14:28:35.278094 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 14:28:35 crc kubenswrapper[4770]: I1209 14:28:35.403535 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 09 14:28:35 crc kubenswrapper[4770]: I1209 14:28:35.523428 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 14:28:35 crc kubenswrapper[4770]: I1209 14:28:35.673891 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 14:28:36 crc kubenswrapper[4770]: I1209 14:28:36.197112 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 14:28:36 crc kubenswrapper[4770]: I1209 14:28:36.302315 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 09 14:28:36 crc kubenswrapper[4770]: I1209 14:28:36.405269 4770 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:28:36 crc kubenswrapper[4770]: I1209 14:28:36.405522 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2487b15a66bc07f6c5fb071f17ddc95e107953c11ecef760cc5702f730da253a" gracePeriod=5 Dec 09 14:28:36 crc kubenswrapper[4770]: I1209 14:28:36.557624 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 14:28:36 crc kubenswrapper[4770]: I1209 14:28:36.713388 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 09 14:28:36 crc kubenswrapper[4770]: I1209 14:28:36.854478 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 14:28:37 crc kubenswrapper[4770]: I1209 14:28:37.003787 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 14:28:37 crc kubenswrapper[4770]: I1209 14:28:37.265855 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 14:28:37 crc kubenswrapper[4770]: I1209 14:28:37.544253 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 09 14:28:37 crc kubenswrapper[4770]: I1209 14:28:37.626005 4770 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 14:28:37 crc kubenswrapper[4770]: E1209 14:28:37.830985 4770 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 09 14:28:37 crc kubenswrapper[4770]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fff9cfc9d-d525c_openshift-authentication_2074218a-8db0-4348-817b-d00026c869d2_0(80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da): error adding pod openshift-authentication_oauth-openshift-5fff9cfc9d-d525c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da" Netns:"/var/run/netns/01c46989-a83a-4e37-88bb-d22ce67f3521" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5fff9cfc9d-d525c;K8S_POD_INFRA_CONTAINER_ID=80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da;K8S_POD_UID=2074218a-8db0-4348-817b-d00026c869d2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c] networking: Multus: [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c/2074218a-8db0-4348-817b-d00026c869d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5fff9cfc9d-d525c in out of cluster comm: pod "oauth-openshift-5fff9cfc9d-d525c" not found Dec 09 14:28:37 crc kubenswrapper[4770]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 09 14:28:37 crc kubenswrapper[4770]: > Dec 09 14:28:37 crc kubenswrapper[4770]: E1209 14:28:37.831087 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 09 14:28:37 crc kubenswrapper[4770]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fff9cfc9d-d525c_openshift-authentication_2074218a-8db0-4348-817b-d00026c869d2_0(80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da): error adding pod openshift-authentication_oauth-openshift-5fff9cfc9d-d525c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da" Netns:"/var/run/netns/01c46989-a83a-4e37-88bb-d22ce67f3521" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5fff9cfc9d-d525c;K8S_POD_INFRA_CONTAINER_ID=80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da;K8S_POD_UID=2074218a-8db0-4348-817b-d00026c869d2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c] networking: Multus: [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c/2074218a-8db0-4348-817b-d00026c869d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5fff9cfc9d-d525c in out of cluster comm: pod "oauth-openshift-5fff9cfc9d-d525c" not found Dec 09 
14:28:37 crc kubenswrapper[4770]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 09 14:28:37 crc kubenswrapper[4770]: > pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:37 crc kubenswrapper[4770]: E1209 14:28:37.831114 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 09 14:28:37 crc kubenswrapper[4770]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fff9cfc9d-d525c_openshift-authentication_2074218a-8db0-4348-817b-d00026c869d2_0(80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da): error adding pod openshift-authentication_oauth-openshift-5fff9cfc9d-d525c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da" Netns:"/var/run/netns/01c46989-a83a-4e37-88bb-d22ce67f3521" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5fff9cfc9d-d525c;K8S_POD_INFRA_CONTAINER_ID=80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da;K8S_POD_UID=2074218a-8db0-4348-817b-d00026c869d2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c] networking: Multus: [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c/2074218a-8db0-4348-817b-d00026c869d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5fff9cfc9d-d525c in out of cluster comm: pod "oauth-openshift-5fff9cfc9d-d525c" not found Dec 09 14:28:37 crc kubenswrapper[4770]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 09 14:28:37 crc kubenswrapper[4770]: > pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:37 crc kubenswrapper[4770]: E1209 14:28:37.831188 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5fff9cfc9d-d525c_openshift-authentication(2074218a-8db0-4348-817b-d00026c869d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5fff9cfc9d-d525c_openshift-authentication(2074218a-8db0-4348-817b-d00026c869d2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fff9cfc9d-d525c_openshift-authentication_2074218a-8db0-4348-817b-d00026c869d2_0(80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da): error adding pod openshift-authentication_oauth-openshift-5fff9cfc9d-d525c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da\\\" 
Netns:\\\"/var/run/netns/01c46989-a83a-4e37-88bb-d22ce67f3521\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5fff9cfc9d-d525c;K8S_POD_INFRA_CONTAINER_ID=80675fb4247996955b4053517e115d3ece811f018446578d82ca0c220a2c85da;K8S_POD_UID=2074218a-8db0-4348-817b-d00026c869d2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c] networking: Multus: [openshift-authentication/oauth-openshift-5fff9cfc9d-d525c/2074218a-8db0-4348-817b-d00026c869d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5fff9cfc9d-d525c in out of cluster comm: pod \\\"oauth-openshift-5fff9cfc9d-d525c\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" podUID="2074218a-8db0-4348-817b-d00026c869d2" Dec 09 14:28:37 crc kubenswrapper[4770]: I1209 14:28:37.848529 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 14:28:38 crc kubenswrapper[4770]: I1209 14:28:38.013077 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 14:28:38 crc kubenswrapper[4770]: I1209 14:28:38.420111 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 09 14:28:38 crc kubenswrapper[4770]: I1209 14:28:38.454185 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 14:28:38 crc kubenswrapper[4770]: I1209 14:28:38.585101 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 14:28:38 crc kubenswrapper[4770]: I1209 14:28:38.746679 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 14:28:39 crc kubenswrapper[4770]: I1209 14:28:39.121535 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 09 14:28:39 crc kubenswrapper[4770]: I1209 14:28:39.281873 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 14:28:39 crc kubenswrapper[4770]: I1209 14:28:39.723889 4770 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 14:28:39 crc kubenswrapper[4770]: I1209 14:28:39.791679 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 14:28:40 crc kubenswrapper[4770]: I1209 14:28:40.253111 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 09 14:28:40 crc kubenswrapper[4770]: I1209 14:28:40.292368 4770 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 14:28:40 crc kubenswrapper[4770]: I1209 14:28:40.345749 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 14:28:40 crc kubenswrapper[4770]: I1209 14:28:40.374169 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 14:28:40 crc kubenswrapper[4770]: I1209 14:28:40.650675 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 14:28:40 crc kubenswrapper[4770]: I1209 14:28:40.954291 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 14:28:40 crc kubenswrapper[4770]: I1209 14:28:40.954543 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 14:28:40 crc kubenswrapper[4770]: I1209 14:28:40.958352 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 14:28:41 crc kubenswrapper[4770]: I1209 14:28:41.947419 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 14:28:41 crc kubenswrapper[4770]: I1209 14:28:41.947490 4770 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2487b15a66bc07f6c5fb071f17ddc95e107953c11ecef760cc5702f730da253a" exitCode=137 Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.008354 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.009380 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.093114 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.093169 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.093197 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.093226 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.093300 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.093602 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.093642 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.093669 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.093690 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.103512 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.195089 4770 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.195131 4770 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.195140 4770 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.195151 4770 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.195161 4770 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.437201 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.448593 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.596280 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.955898 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.956010 4770 scope.go:117] "RemoveContainer" containerID="2487b15a66bc07f6c5fb071f17ddc95e107953c11ecef760cc5702f730da253a" Dec 09 14:28:42 crc kubenswrapper[4770]: I1209 14:28:42.956076 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:28:43 crc kubenswrapper[4770]: I1209 14:28:43.081719 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 14:28:43 crc kubenswrapper[4770]: I1209 14:28:43.726983 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 14:28:44 crc kubenswrapper[4770]: I1209 14:28:44.243589 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:28:44 crc kubenswrapper[4770]: I1209 14:28:44.244212 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:28:44 crc kubenswrapper[4770]: I1209 14:28:44.531188 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 14:28:48 crc kubenswrapper[4770]: I1209 14:28:48.597321 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:48 crc kubenswrapper[4770]: I1209 14:28:48.598506 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:48 crc kubenswrapper[4770]: I1209 14:28:48.880583 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5fff9cfc9d-d525c"] Dec 09 14:28:49 crc kubenswrapper[4770]: I1209 14:28:49.001677 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" event={"ID":"2074218a-8db0-4348-817b-d00026c869d2","Type":"ContainerStarted","Data":"e97f5d47756d4cd971b62e67f540b5cefe373efc4640bdb6a8c23a910fae7640"} Dec 09 14:28:50 crc kubenswrapper[4770]: I1209 14:28:50.009484 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" event={"ID":"2074218a-8db0-4348-817b-d00026c869d2","Type":"ContainerStarted","Data":"e66b121290f40efa8341899c65d884364ff7d9bfe3e2849cc0e29e83a3297552"} Dec 09 14:28:50 crc kubenswrapper[4770]: I1209 14:28:50.010031 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:50 crc kubenswrapper[4770]: I1209 14:28:50.015363 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" Dec 09 14:28:50 crc kubenswrapper[4770]: I1209 14:28:50.035697 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5fff9cfc9d-d525c" podStartSLOduration=134.03568044 podStartE2EDuration="2m14.03568044s" podCreationTimestamp="2025-12-09 14:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:28:50.03422915 +0000 UTC m=+361.930431316" 
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.436627 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mw288"]
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.438404 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" podUID="c2c6e492-2de2-4b7c-bc62-a3396a49b56e" containerName="controller-manager" containerID="cri-o://efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449" gracePeriod=30
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.553117 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"]
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.553997 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" podUID="49d5b890-581d-4f7a-9811-2f011513994f" containerName="route-controller-manager" containerID="cri-o://c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851" gracePeriod=30
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.797120 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288"
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.837487 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-proxy-ca-bundles\") pod \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") "
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.837544 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-config\") pod \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") "
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.837565 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-serving-cert\") pod \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") "
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.837612 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-client-ca\") pod \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") "
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.837680 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbcdm\" (UniqueName: \"kubernetes.io/projected/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-kube-api-access-cbcdm\") pod \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\" (UID: \"c2c6e492-2de2-4b7c-bc62-a3396a49b56e\") "
Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.838998 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c2c6e492-2de2-4b7c-bc62-a3396a49b56e" (UID: "c2c6e492-2de2-4b7c-bc62-a3396a49b56e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
(UID: "c2c6e492-2de2-4b7c-bc62-a3396a49b56e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.839182 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-config" (OuterVolumeSpecName: "config") pod "c2c6e492-2de2-4b7c-bc62-a3396a49b56e" (UID: "c2c6e492-2de2-4b7c-bc62-a3396a49b56e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.839593 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-client-ca" (OuterVolumeSpecName: "client-ca") pod "c2c6e492-2de2-4b7c-bc62-a3396a49b56e" (UID: "c2c6e492-2de2-4b7c-bc62-a3396a49b56e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.844893 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-kube-api-access-cbcdm" (OuterVolumeSpecName: "kube-api-access-cbcdm") pod "c2c6e492-2de2-4b7c-bc62-a3396a49b56e" (UID: "c2c6e492-2de2-4b7c-bc62-a3396a49b56e"). InnerVolumeSpecName "kube-api-access-cbcdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.844949 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c2c6e492-2de2-4b7c-bc62-a3396a49b56e" (UID: "c2c6e492-2de2-4b7c-bc62-a3396a49b56e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.893973 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.939210 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-client-ca\") pod \"49d5b890-581d-4f7a-9811-2f011513994f\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.939270 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-config\") pod \"49d5b890-581d-4f7a-9811-2f011513994f\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.939306 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d5b890-581d-4f7a-9811-2f011513994f-serving-cert\") pod \"49d5b890-581d-4f7a-9811-2f011513994f\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.939350 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx6hw\" (UniqueName: \"kubernetes.io/projected/49d5b890-581d-4f7a-9811-2f011513994f-kube-api-access-fx6hw\") pod \"49d5b890-581d-4f7a-9811-2f011513994f\" (UID: \"49d5b890-581d-4f7a-9811-2f011513994f\") " Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.939602 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.939619 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.939630 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.939642 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.939654 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbcdm\" (UniqueName: \"kubernetes.io/projected/c2c6e492-2de2-4b7c-bc62-a3396a49b56e-kube-api-access-cbcdm\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.943426 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-client-ca" (OuterVolumeSpecName: "client-ca") pod "49d5b890-581d-4f7a-9811-2f011513994f" (UID: "49d5b890-581d-4f7a-9811-2f011513994f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.943513 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-config" (OuterVolumeSpecName: "config") pod "49d5b890-581d-4f7a-9811-2f011513994f" (UID: "49d5b890-581d-4f7a-9811-2f011513994f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.945518 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d5b890-581d-4f7a-9811-2f011513994f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "49d5b890-581d-4f7a-9811-2f011513994f" (UID: "49d5b890-581d-4f7a-9811-2f011513994f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:28:58 crc kubenswrapper[4770]: I1209 14:28:58.947417 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d5b890-581d-4f7a-9811-2f011513994f-kube-api-access-fx6hw" (OuterVolumeSpecName: "kube-api-access-fx6hw") pod "49d5b890-581d-4f7a-9811-2f011513994f" (UID: "49d5b890-581d-4f7a-9811-2f011513994f"). InnerVolumeSpecName "kube-api-access-fx6hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.040491 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.040532 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d5b890-581d-4f7a-9811-2f011513994f-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.040541 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d5b890-581d-4f7a-9811-2f011513994f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.040551 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx6hw\" (UniqueName: \"kubernetes.io/projected/49d5b890-581d-4f7a-9811-2f011513994f-kube-api-access-fx6hw\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.063145 4770 generic.go:334] "Generic (PLEG): container finished" podID="c2c6e492-2de2-4b7c-bc62-a3396a49b56e" containerID="efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449" exitCode=0 Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.063248 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" event={"ID":"c2c6e492-2de2-4b7c-bc62-a3396a49b56e","Type":"ContainerDied","Data":"efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449"} Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.063346 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.063766 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mw288" event={"ID":"c2c6e492-2de2-4b7c-bc62-a3396a49b56e","Type":"ContainerDied","Data":"e23604394d6c9b3674b789917e932c10e0134b722b8988cec03272aecfd32c8a"} Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.063825 4770 scope.go:117] "RemoveContainer" containerID="efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.066937 4770 generic.go:334] "Generic (PLEG): container finished" podID="49d5b890-581d-4f7a-9811-2f011513994f" containerID="c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851" exitCode=0 Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.066977 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.067021 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" event={"ID":"49d5b890-581d-4f7a-9811-2f011513994f","Type":"ContainerDied","Data":"c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851"} Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.067047 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph" event={"ID":"49d5b890-581d-4f7a-9811-2f011513994f","Type":"ContainerDied","Data":"ffddceda7715a7f980111872fa80c7aa9d66ed99c5d0cdecd77a0b5305da1951"} Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.090638 4770 scope.go:117] "RemoveContainer" containerID="efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449" Dec 09 14:28:59 crc kubenswrapper[4770]: E1209 14:28:59.092982 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449\": container with ID starting with efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449 not found: ID does not exist" containerID="efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.093075 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449"} err="failed to get container status \"efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449\": rpc error: code = NotFound desc = could not find container \"efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449\": container with ID starting with efc56f320e19837c30ad637ccc96478dae72bd3f4d6204856ec1872553c9b449 not found: ID does not exist" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.093116 4770 scope.go:117] "RemoveContainer" containerID="c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.100307 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mw288"] Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.106851 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-mw288"] Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.111414 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"] Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.115371 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-492ph"] Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.115698 4770 scope.go:117] "RemoveContainer" containerID="c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851" Dec 09 14:28:59 crc kubenswrapper[4770]: E1209 14:28:59.116630 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851\": container with ID starting with c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851 not found: ID does not exist" containerID="c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851" Dec 09 14:28:59 crc kubenswrapper[4770]: I1209 14:28:59.116678 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851"} err="failed to get container status \"c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851\": rpc error: code = NotFound desc = could not find container \"c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851\": container with ID starting with c2740b2182835db5f2e19d64c26ef67d0e4d37cb8933dc35b9bdf7e174414851 not found: ID does not exist" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.409781 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"] Dec 09 14:29:00 crc kubenswrapper[4770]: E1209 14:29:00.410046 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d5b890-581d-4f7a-9811-2f011513994f" containerName="route-controller-manager" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.410061 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d5b890-581d-4f7a-9811-2f011513994f" containerName="route-controller-manager" Dec 09 14:29:00 crc kubenswrapper[4770]: E1209 14:29:00.410082 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c6e492-2de2-4b7c-bc62-a3396a49b56e" containerName="controller-manager" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.410090 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c6e492-2de2-4b7c-bc62-a3396a49b56e" containerName="controller-manager" Dec 09 14:29:00 crc kubenswrapper[4770]: E1209 14:29:00.410102 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.410109 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.410233 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.410250 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d5b890-581d-4f7a-9811-2f011513994f" containerName="route-controller-manager" Dec 09 14:29:00 crc 
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.410693 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.414554 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.415069 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.415471 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.418652 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.418904 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"]
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.419780 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.420502 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.427840 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.428499 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.430215 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.430223 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.430529 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.430704 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.430907 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.431062 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.436836 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"]
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.447889 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"]
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"] Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.458702 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c44l\" (UniqueName: \"kubernetes.io/projected/b8e2cc36-a072-453a-989c-ee5daf63d7be-kube-api-access-7c44l\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.458829 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-config\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.458943 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmpz\" (UniqueName: \"kubernetes.io/projected/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-kube-api-access-fzmpz\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.459050 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-proxy-ca-bundles\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.459102 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e2cc36-a072-453a-989c-ee5daf63d7be-serving-cert\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.459243 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-config\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.459332 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-client-ca\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.459431 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-client-ca\") pod \"controller-manager-6b856cbcf7-4vvnv\" 
(UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.459639 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-serving-cert\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.561457 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-serving-cert\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.562544 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c44l\" (UniqueName: \"kubernetes.io/projected/b8e2cc36-a072-453a-989c-ee5daf63d7be-kube-api-access-7c44l\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.562592 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-config\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.562640 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzmpz\" (UniqueName: \"kubernetes.io/projected/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-kube-api-access-fzmpz\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.562693 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-proxy-ca-bundles\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.562773 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e2cc36-a072-453a-989c-ee5daf63d7be-serving-cert\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.562821 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-config\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.562857 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-client-ca\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.562899 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-client-ca\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.564303 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-proxy-ca-bundles\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.564417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-client-ca\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.564443 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-config\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.564773 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-client-ca\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.565140 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-config\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.569522 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e2cc36-a072-453a-989c-ee5daf63d7be-serving-cert\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"
Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.575624 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-serving-cert\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"
\"kubernetes.io/secret/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-serving-cert\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.579231 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c44l\" (UniqueName: \"kubernetes.io/projected/b8e2cc36-a072-453a-989c-ee5daf63d7be-kube-api-access-7c44l\") pod \"route-controller-manager-6969c4b58c-4rxnp\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.583549 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzmpz\" (UniqueName: \"kubernetes.io/projected/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-kube-api-access-fzmpz\") pod \"controller-manager-6b856cbcf7-4vvnv\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.596476 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d5b890-581d-4f7a-9811-2f011513994f" path="/var/lib/kubelet/pods/49d5b890-581d-4f7a-9811-2f011513994f/volumes" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.597622 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c6e492-2de2-4b7c-bc62-a3396a49b56e" path="/var/lib/kubelet/pods/c2c6e492-2de2-4b7c-bc62-a3396a49b56e/volumes" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.736560 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:00 crc kubenswrapper[4770]: I1209 14:29:00.748010 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:01 crc kubenswrapper[4770]: I1209 14:29:01.046411 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"] Dec 09 14:29:01 crc kubenswrapper[4770]: I1209 14:29:01.130248 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" event={"ID":"6740a9b1-550d-48ab-8853-bbc5ea7e47e9","Type":"ContainerStarted","Data":"e809ebc9fb6c075fbe75488707abff25be3728247aeaed3756a55de300e47cb6"} Dec 09 14:29:01 crc kubenswrapper[4770]: I1209 14:29:01.239970 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"] Dec 09 14:29:01 crc kubenswrapper[4770]: W1209 14:29:01.243385 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8e2cc36_a072_453a_989c_ee5daf63d7be.slice/crio-b4b33134b78b4dfab578bd9523fae57083d4eca6c80bc1a9641a0c1231c3c6e2 WatchSource:0}: Error finding container b4b33134b78b4dfab578bd9523fae57083d4eca6c80bc1a9641a0c1231c3c6e2: Status 404 returned error can't find the container with id b4b33134b78b4dfab578bd9523fae57083d4eca6c80bc1a9641a0c1231c3c6e2 Dec 09 14:29:01 crc kubenswrapper[4770]: I1209 14:29:01.616627 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"] Dec 09 14:29:01 crc kubenswrapper[4770]: I1209 14:29:01.688149 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"] Dec 09 14:29:02 crc kubenswrapper[4770]: I1209 14:29:02.136576 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" event={"ID":"b8e2cc36-a072-453a-989c-ee5daf63d7be","Type":"ContainerStarted","Data":"c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25"} Dec 09 14:29:02 crc kubenswrapper[4770]: I1209 14:29:02.137082 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" event={"ID":"b8e2cc36-a072-453a-989c-ee5daf63d7be","Type":"ContainerStarted","Data":"b4b33134b78b4dfab578bd9523fae57083d4eca6c80bc1a9641a0c1231c3c6e2"} Dec 09 14:29:02 crc kubenswrapper[4770]: I1209 14:29:02.137100 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:02 crc kubenswrapper[4770]: I1209 14:29:02.138009 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" event={"ID":"6740a9b1-550d-48ab-8853-bbc5ea7e47e9","Type":"ContainerStarted","Data":"eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9"} Dec 09 14:29:02 crc kubenswrapper[4770]: I1209 14:29:02.138283 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:02 crc kubenswrapper[4770]: I1209 14:29:02.141299 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:02 crc kubenswrapper[4770]: I1209 14:29:02.144362 4770 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:02 crc kubenswrapper[4770]: I1209 14:29:02.157438 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" podStartSLOduration=4.15741709 podStartE2EDuration="4.15741709s" podCreationTimestamp="2025-12-09 14:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:29:02.15351459 +0000 UTC m=+374.049716726" watchObservedRunningTime="2025-12-09 14:29:02.15741709 +0000 UTC m=+374.053619236" Dec 09 14:29:02 crc kubenswrapper[4770]: I1209 14:29:02.179779 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" podStartSLOduration=4.179754157 podStartE2EDuration="4.179754157s" podCreationTimestamp="2025-12-09 14:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:29:02.174613053 +0000 UTC m=+374.070815179" watchObservedRunningTime="2025-12-09 14:29:02.179754157 +0000 UTC m=+374.075956313" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.144176 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" podUID="b8e2cc36-a072-453a-989c-ee5daf63d7be" containerName="route-controller-manager" containerID="cri-o://c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25" gracePeriod=30 Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.144471 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" podUID="6740a9b1-550d-48ab-8853-bbc5ea7e47e9" containerName="controller-manager" containerID="cri-o://eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9" gracePeriod=30 Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.599592 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.635343 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss"] Dec 09 14:29:03 crc kubenswrapper[4770]: E1209 14:29:03.635609 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e2cc36-a072-453a-989c-ee5daf63d7be" containerName="route-controller-manager" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.635627 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e2cc36-a072-453a-989c-ee5daf63d7be" containerName="route-controller-manager" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.635788 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e2cc36-a072-453a-989c-ee5daf63d7be" containerName="route-controller-manager" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.636256 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.656353 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss"] Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.689387 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.708565 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-client-ca\") pod \"b8e2cc36-a072-453a-989c-ee5daf63d7be\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.708657 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-config\") pod \"b8e2cc36-a072-453a-989c-ee5daf63d7be\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.708688 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e2cc36-a072-453a-989c-ee5daf63d7be-serving-cert\") pod \"b8e2cc36-a072-453a-989c-ee5daf63d7be\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.708897 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c44l\" (UniqueName: \"kubernetes.io/projected/b8e2cc36-a072-453a-989c-ee5daf63d7be-kube-api-access-7c44l\") pod \"b8e2cc36-a072-453a-989c-ee5daf63d7be\" (UID: \"b8e2cc36-a072-453a-989c-ee5daf63d7be\") " Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.709127 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-serving-cert\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.709161 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khp9l\" (UniqueName: \"kubernetes.io/projected/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-kube-api-access-khp9l\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.709186 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-config\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.709211 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-client-ca\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.709568 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-client-ca" (OuterVolumeSpecName: "client-ca") pod "b8e2cc36-a072-453a-989c-ee5daf63d7be" (UID: "b8e2cc36-a072-453a-989c-ee5daf63d7be"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.710426 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-config" (OuterVolumeSpecName: "config") pod "b8e2cc36-a072-453a-989c-ee5daf63d7be" (UID: "b8e2cc36-a072-453a-989c-ee5daf63d7be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.717003 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e2cc36-a072-453a-989c-ee5daf63d7be-kube-api-access-7c44l" (OuterVolumeSpecName: "kube-api-access-7c44l") pod "b8e2cc36-a072-453a-989c-ee5daf63d7be" (UID: "b8e2cc36-a072-453a-989c-ee5daf63d7be"). InnerVolumeSpecName "kube-api-access-7c44l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.717938 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e2cc36-a072-453a-989c-ee5daf63d7be-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b8e2cc36-a072-453a-989c-ee5daf63d7be" (UID: "b8e2cc36-a072-453a-989c-ee5daf63d7be"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.810053 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-proxy-ca-bundles\") pod \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.810550 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-serving-cert\") pod \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.810500 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6740a9b1-550d-48ab-8853-bbc5ea7e47e9" (UID: "6740a9b1-550d-48ab-8853-bbc5ea7e47e9"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.810987 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-client-ca\") pod \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811029 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzmpz\" (UniqueName: \"kubernetes.io/projected/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-kube-api-access-fzmpz\") pod \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811051 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-config\") pod \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\" (UID: \"6740a9b1-550d-48ab-8853-bbc5ea7e47e9\") " Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811262 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-serving-cert\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811292 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khp9l\" (UniqueName: \"kubernetes.io/projected/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-kube-api-access-khp9l\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811310 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-config\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811327 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-client-ca\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811363 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c44l\" (UniqueName: \"kubernetes.io/projected/b8e2cc36-a072-453a-989c-ee5daf63d7be-kube-api-access-7c44l\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811373 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811382 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811391 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e2cc36-a072-453a-989c-ee5daf63d7be-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.811400 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e2cc36-a072-453a-989c-ee5daf63d7be-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.812872 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-client-ca\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.813293 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-config\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.814188 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "6740a9b1-550d-48ab-8853-bbc5ea7e47e9" (UID: "6740a9b1-550d-48ab-8853-bbc5ea7e47e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.814354 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-config" (OuterVolumeSpecName: "config") pod "6740a9b1-550d-48ab-8853-bbc5ea7e47e9" (UID: "6740a9b1-550d-48ab-8853-bbc5ea7e47e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.816591 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-serving-cert\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.817446 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6740a9b1-550d-48ab-8853-bbc5ea7e47e9" (UID: "6740a9b1-550d-48ab-8853-bbc5ea7e47e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.819409 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-kube-api-access-fzmpz" (OuterVolumeSpecName: "kube-api-access-fzmpz") pod "6740a9b1-550d-48ab-8853-bbc5ea7e47e9" (UID: "6740a9b1-550d-48ab-8853-bbc5ea7e47e9"). 
InnerVolumeSpecName "kube-api-access-fzmpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.830314 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khp9l\" (UniqueName: \"kubernetes.io/projected/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-kube-api-access-khp9l\") pod \"route-controller-manager-7c77b7bdff-gnhss\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.912909 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.912974 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.912992 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzmpz\" (UniqueName: \"kubernetes.io/projected/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-kube-api-access-fzmpz\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:03 crc kubenswrapper[4770]: I1209 14:29:03.913012 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6740a9b1-550d-48ab-8853-bbc5ea7e47e9-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.009868 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.157412 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" event={"ID":"b8e2cc36-a072-453a-989c-ee5daf63d7be","Type":"ContainerDied","Data":"c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25"} Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.157891 4770 scope.go:117] "RemoveContainer" containerID="c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.158290 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.157216 4770 generic.go:334] "Generic (PLEG): container finished" podID="b8e2cc36-a072-453a-989c-ee5daf63d7be" containerID="c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25" exitCode=0 Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.160961 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp" event={"ID":"b8e2cc36-a072-453a-989c-ee5daf63d7be","Type":"ContainerDied","Data":"b4b33134b78b4dfab578bd9523fae57083d4eca6c80bc1a9641a0c1231c3c6e2"} Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.167511 4770 generic.go:334] "Generic (PLEG): container finished" podID="6740a9b1-550d-48ab-8853-bbc5ea7e47e9" containerID="eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9" exitCode=0 Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.167706 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" event={"ID":"6740a9b1-550d-48ab-8853-bbc5ea7e47e9","Type":"ContainerDied","Data":"eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9"} Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.167840 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" event={"ID":"6740a9b1-550d-48ab-8853-bbc5ea7e47e9","Type":"ContainerDied","Data":"e809ebc9fb6c075fbe75488707abff25be3728247aeaed3756a55de300e47cb6"} Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.167974 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.183172 4770 scope.go:117] "RemoveContainer" containerID="c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25" Dec 09 14:29:04 crc kubenswrapper[4770]: E1209 14:29:04.191371 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25\": container with ID starting with c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25 not found: ID does not exist" containerID="c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.191422 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25"} err="failed to get container status \"c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25\": rpc error: code = NotFound desc = could not find container \"c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25\": container with ID starting with c7451565888fbeac89381564b445da2248f1c01e541e2f7473a4c4b21c57bf25 not found: ID does not exist" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.191450 4770 scope.go:117] "RemoveContainer" containerID="eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.194806 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"] Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.202003 4770 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6969c4b58c-4rxnp"] Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.210803 4770 scope.go:117] "RemoveContainer" containerID="eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9" Dec 09 14:29:04 crc kubenswrapper[4770]: E1209 14:29:04.211872 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9\": container with ID starting with eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9 not found: ID does not exist" containerID="eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.211964 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9"} err="failed to get container status \"eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9\": rpc error: code = NotFound desc = could not find container \"eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9\": container with ID starting with eaf33a3fba67fde319ae65fa6a0ce92f9699fdab138ed70844dcb11fc0dd17e9 not found: ID does not exist" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.220990 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"] Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.227482 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b856cbcf7-4vvnv"] Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.235863 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss"] Dec 09 14:29:04 crc kubenswrapper[4770]: W1209 14:29:04.242738 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4ff7912_6ed3_4bf7_9eca_4a8ab7e2d1c7.slice/crio-ba725f3b06f3254a1e2bf9cb5f3f41ea489bfd857f69fde4ccac75da4d195cd4 WatchSource:0}: Error finding container ba725f3b06f3254a1e2bf9cb5f3f41ea489bfd857f69fde4ccac75da4d195cd4: Status 404 returned error can't find the container with id ba725f3b06f3254a1e2bf9cb5f3f41ea489bfd857f69fde4ccac75da4d195cd4 Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.595892 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6740a9b1-550d-48ab-8853-bbc5ea7e47e9" path="/var/lib/kubelet/pods/6740a9b1-550d-48ab-8853-bbc5ea7e47e9/volumes" Dec 09 14:29:04 crc kubenswrapper[4770]: I1209 14:29:04.596425 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e2cc36-a072-453a-989c-ee5daf63d7be" path="/var/lib/kubelet/pods/b8e2cc36-a072-453a-989c-ee5daf63d7be/volumes" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.177287 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" event={"ID":"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7","Type":"ContainerStarted","Data":"fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d"} Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.178148 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" 
event={"ID":"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7","Type":"ContainerStarted","Data":"ba725f3b06f3254a1e2bf9cb5f3f41ea489bfd857f69fde4ccac75da4d195cd4"} Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.178458 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.203301 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" podStartSLOduration=4.203277686 podStartE2EDuration="4.203277686s" podCreationTimestamp="2025-12-09 14:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:29:05.20130181 +0000 UTC m=+377.097503936" watchObservedRunningTime="2025-12-09 14:29:05.203277686 +0000 UTC m=+377.099479822" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.267800 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.672783 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r"] Dec 09 14:29:05 crc kubenswrapper[4770]: E1209 14:29:05.673197 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6740a9b1-550d-48ab-8853-bbc5ea7e47e9" containerName="controller-manager" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.673227 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6740a9b1-550d-48ab-8853-bbc5ea7e47e9" containerName="controller-manager" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.673386 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6740a9b1-550d-48ab-8853-bbc5ea7e47e9" containerName="controller-manager" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.674098 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.676647 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.677127 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.677304 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.677592 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.678114 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.682632 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss"] Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.683347 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.685797 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.689034 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r"] Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.739122 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-client-ca\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.739211 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4p7\" (UniqueName: \"kubernetes.io/projected/20d4ed64-7247-4656-b479-0b9c06c2fcc9-kube-api-access-cp4p7\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.739269 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d4ed64-7247-4656-b479-0b9c06c2fcc9-serving-cert\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.739379 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-proxy-ca-bundles\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 
09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.739465 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-config\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.840983 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-config\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.841030 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-client-ca\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.841064 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4p7\" (UniqueName: \"kubernetes.io/projected/20d4ed64-7247-4656-b479-0b9c06c2fcc9-kube-api-access-cp4p7\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.841093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d4ed64-7247-4656-b479-0b9c06c2fcc9-serving-cert\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.841134 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-proxy-ca-bundles\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.842237 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-proxy-ca-bundles\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.842263 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-client-ca\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.843081 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-config\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.849798 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d4ed64-7247-4656-b479-0b9c06c2fcc9-serving-cert\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:05 crc kubenswrapper[4770]: I1209 14:29:05.872940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4p7\" (UniqueName: \"kubernetes.io/projected/20d4ed64-7247-4656-b479-0b9c06c2fcc9-kube-api-access-cp4p7\") pod \"controller-manager-66cc94c8b8-2vc9r\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:06 crc kubenswrapper[4770]: I1209 14:29:06.000055 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:06 crc kubenswrapper[4770]: I1209 14:29:06.226447 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r"] Dec 09 14:29:06 crc kubenswrapper[4770]: W1209 14:29:06.233172 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20d4ed64_7247_4656_b479_0b9c06c2fcc9.slice/crio-6afe144ca9e1b3b8af63fd49b9c685ba04873d7209345fd0fc454ec1c71e070f WatchSource:0}: Error finding container 6afe144ca9e1b3b8af63fd49b9c685ba04873d7209345fd0fc454ec1c71e070f: Status 404 returned error can't find the container with id 6afe144ca9e1b3b8af63fd49b9c685ba04873d7209345fd0fc454ec1c71e070f Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.196871 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" event={"ID":"20d4ed64-7247-4656-b479-0b9c06c2fcc9","Type":"ContainerStarted","Data":"4326034ba00ce30b1b60c33d2d0894fa92cf449df24190167e0205c29376019e"} Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.197121 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" event={"ID":"20d4ed64-7247-4656-b479-0b9c06c2fcc9","Type":"ContainerStarted","Data":"6afe144ca9e1b3b8af63fd49b9c685ba04873d7209345fd0fc454ec1c71e070f"} Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.196982 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" podUID="d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" containerName="route-controller-manager" containerID="cri-o://fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d" gracePeriod=30 Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.229069 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" podStartSLOduration=2.229044798 podStartE2EDuration="2.229044798s" podCreationTimestamp="2025-12-09 14:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-09 14:29:07.224477679 +0000 UTC m=+379.120679835" watchObservedRunningTime="2025-12-09 14:29:07.229044798 +0000 UTC m=+379.125246934" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.554056 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.589175 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z"] Dec 09 14:29:07 crc kubenswrapper[4770]: E1209 14:29:07.589445 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" containerName="route-controller-manager" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.589465 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" containerName="route-controller-manager" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.589584 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" containerName="route-controller-manager" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.590050 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.595702 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z"] Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.674108 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-client-ca\") pod \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.674171 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-serving-cert\") pod \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.674195 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-config\") pod \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.674215 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khp9l\" (UniqueName: \"kubernetes.io/projected/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-kube-api-access-khp9l\") pod \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\" (UID: \"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7\") " Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.674328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-client-ca\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.674360 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4drp\" (UniqueName: \"kubernetes.io/projected/f3d558e2-b766-4578-a013-698d51ecf51d-kube-api-access-k4drp\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.674385 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-config\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.674409 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d558e2-b766-4578-a013-698d51ecf51d-serving-cert\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.674661 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" (UID: "d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.675473 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-config" (OuterVolumeSpecName: "config") pod "d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" (UID: "d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.681091 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" (UID: "d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.694052 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-kube-api-access-khp9l" (OuterVolumeSpecName: "kube-api-access-khp9l") pod "d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" (UID: "d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7"). InnerVolumeSpecName "kube-api-access-khp9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.775319 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-config\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.775678 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d558e2-b766-4578-a013-698d51ecf51d-serving-cert\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.775964 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-client-ca\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.776045 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4drp\" (UniqueName: \"kubernetes.io/projected/f3d558e2-b766-4578-a013-698d51ecf51d-kube-api-access-k4drp\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.776140 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.776161 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khp9l\" (UniqueName: \"kubernetes.io/projected/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-kube-api-access-khp9l\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.776189 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.776205 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.776948 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-client-ca\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.777552 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-config\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: 
\"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.780029 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d558e2-b766-4578-a013-698d51ecf51d-serving-cert\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.793532 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4drp\" (UniqueName: \"kubernetes.io/projected/f3d558e2-b766-4578-a013-698d51ecf51d-kube-api-access-k4drp\") pod \"route-controller-manager-7b895dcf8-6dx7z\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:07 crc kubenswrapper[4770]: I1209 14:29:07.907443 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.204531 4770 generic.go:334] "Generic (PLEG): container finished" podID="d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" containerID="fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d" exitCode=0 Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.204632 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.204648 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" event={"ID":"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7","Type":"ContainerDied","Data":"fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d"} Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.205027 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss" event={"ID":"d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7","Type":"ContainerDied","Data":"ba725f3b06f3254a1e2bf9cb5f3f41ea489bfd857f69fde4ccac75da4d195cd4"} Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.205083 4770 scope.go:117] "RemoveContainer" containerID="fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d" Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.205235 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.215032 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.230308 4770 scope.go:117] "RemoveContainer" containerID="fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d" Dec 09 14:29:08 crc kubenswrapper[4770]: E1209 14:29:08.231493 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d\": container with ID starting with fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d not found: ID does not exist" 
containerID="fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d" Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.231534 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d"} err="failed to get container status \"fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d\": rpc error: code = NotFound desc = could not find container \"fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d\": container with ID starting with fb1d1fe87cdae5492abf595ef22ba884a512bb15a5e2a372613f5f82dd90147d not found: ID does not exist" Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.252900 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss"] Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.259190 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c77b7bdff-gnhss"] Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.363968 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z"] Dec 09 14:29:08 crc kubenswrapper[4770]: I1209 14:29:08.598147 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7" path="/var/lib/kubelet/pods/d4ff7912-6ed3-4bf7-9eca-4a8ab7e2d1c7/volumes" Dec 09 14:29:09 crc kubenswrapper[4770]: I1209 14:29:09.212691 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" event={"ID":"f3d558e2-b766-4578-a013-698d51ecf51d","Type":"ContainerStarted","Data":"8f68425ca70a66b13f120db00c3019dbc9648674ee54d99c5048344beea0f1bf"} Dec 09 14:29:09 crc kubenswrapper[4770]: I1209 14:29:09.212810 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" event={"ID":"f3d558e2-b766-4578-a013-698d51ecf51d","Type":"ContainerStarted","Data":"14ae577c51b4b8ec0137bac936e1549ef9c949ecd26800aa50c6d585786567f0"} Dec 09 14:29:09 crc kubenswrapper[4770]: I1209 14:29:09.235285 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" podStartSLOduration=4.235265401 podStartE2EDuration="4.235265401s" podCreationTimestamp="2025-12-09 14:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:29:09.232487643 +0000 UTC m=+381.128689799" watchObservedRunningTime="2025-12-09 14:29:09.235265401 +0000 UTC m=+381.131467537" Dec 09 14:29:10 crc kubenswrapper[4770]: I1209 14:29:10.223903 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:10 crc kubenswrapper[4770]: I1209 14:29:10.229682 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:14 crc kubenswrapper[4770]: I1209 14:29:14.243985 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:29:14 crc kubenswrapper[4770]: I1209 14:29:14.244965 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.508940 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-krgzt"] Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.510535 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.520014 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-krgzt"] Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.681841 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-trusted-ca\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.681894 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.681931 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.681964 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-bound-sa-token\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.681982 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-registry-tls\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.681998 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58wgh\" (UniqueName: \"kubernetes.io/projected/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-kube-api-access-58wgh\") pod \"image-registry-66df7c8f76-krgzt\" (UID: 
\"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.682022 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-registry-certificates\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.682040 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.701842 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.783565 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-trusted-ca\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.783626 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.783703 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-bound-sa-token\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.783763 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-registry-tls\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.783787 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58wgh\" (UniqueName: \"kubernetes.io/projected/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-kube-api-access-58wgh\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 
14:29:31.783827 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-registry-certificates\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.783853 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.784338 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.785077 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-trusted-ca\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.785346 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-registry-certificates\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.790833 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.791280 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-registry-tls\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.801503 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58wgh\" (UniqueName: \"kubernetes.io/projected/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-kube-api-access-58wgh\") pod \"image-registry-66df7c8f76-krgzt\" (UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.801847 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d-bound-sa-token\") pod \"image-registry-66df7c8f76-krgzt\" 
(UID: \"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:31 crc kubenswrapper[4770]: I1209 14:29:31.833469 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:32 crc kubenswrapper[4770]: I1209 14:29:32.287726 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-krgzt"] Dec 09 14:29:32 crc kubenswrapper[4770]: W1209 14:29:32.296711 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6db266b_6dc9_4fbf_aba0_a85eb8f13a2d.slice/crio-1113dbb91336e90127e2fa30dd1ed28291d288c38b3ff00f9a227e99419c1173 WatchSource:0}: Error finding container 1113dbb91336e90127e2fa30dd1ed28291d288c38b3ff00f9a227e99419c1173: Status 404 returned error can't find the container with id 1113dbb91336e90127e2fa30dd1ed28291d288c38b3ff00f9a227e99419c1173 Dec 09 14:29:32 crc kubenswrapper[4770]: I1209 14:29:32.379035 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" event={"ID":"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d","Type":"ContainerStarted","Data":"1113dbb91336e90127e2fa30dd1ed28291d288c38b3ff00f9a227e99419c1173"} Dec 09 14:29:34 crc kubenswrapper[4770]: I1209 14:29:34.402180 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" event={"ID":"c6db266b-6dc9-4fbf-aba0-a85eb8f13a2d","Type":"ContainerStarted","Data":"1745369e937938772f3e5007f96c4356d0d342451675d8645206a2e96a2f8541"} Dec 09 14:29:34 crc kubenswrapper[4770]: I1209 14:29:34.402998 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:34 crc kubenswrapper[4770]: I1209 14:29:34.438028 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" podStartSLOduration=3.437997637 podStartE2EDuration="3.437997637s" podCreationTimestamp="2025-12-09 14:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:29:34.426551513 +0000 UTC m=+406.322753659" watchObservedRunningTime="2025-12-09 14:29:34.437997637 +0000 UTC m=+406.334199803" Dec 09 14:29:38 crc kubenswrapper[4770]: I1209 14:29:38.428120 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z"] Dec 09 14:29:38 crc kubenswrapper[4770]: I1209 14:29:38.428482 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" podUID="f3d558e2-b766-4578-a013-698d51ecf51d" containerName="route-controller-manager" containerID="cri-o://8f68425ca70a66b13f120db00c3019dbc9648674ee54d99c5048344beea0f1bf" gracePeriod=30 Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.438148 4770 generic.go:334] "Generic (PLEG): container finished" podID="f3d558e2-b766-4578-a013-698d51ecf51d" containerID="8f68425ca70a66b13f120db00c3019dbc9648674ee54d99c5048344beea0f1bf" exitCode=0 Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.438253 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" 
event={"ID":"f3d558e2-b766-4578-a013-698d51ecf51d","Type":"ContainerDied","Data":"8f68425ca70a66b13f120db00c3019dbc9648674ee54d99c5048344beea0f1bf"} Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.632467 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.660984 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5"] Dec 09 14:29:39 crc kubenswrapper[4770]: E1209 14:29:39.661261 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d558e2-b766-4578-a013-698d51ecf51d" containerName="route-controller-manager" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.661278 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d558e2-b766-4578-a013-698d51ecf51d" containerName="route-controller-manager" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.661439 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d558e2-b766-4578-a013-698d51ecf51d" containerName="route-controller-manager" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.662217 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.683653 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5"] Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.694600 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-client-ca\") pod \"f3d558e2-b766-4578-a013-698d51ecf51d\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.694966 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-config\") pod \"f3d558e2-b766-4578-a013-698d51ecf51d\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.695101 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d558e2-b766-4578-a013-698d51ecf51d-serving-cert\") pod \"f3d558e2-b766-4578-a013-698d51ecf51d\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.695193 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4drp\" (UniqueName: \"kubernetes.io/projected/f3d558e2-b766-4578-a013-698d51ecf51d-kube-api-access-k4drp\") pod \"f3d558e2-b766-4578-a013-698d51ecf51d\" (UID: \"f3d558e2-b766-4578-a013-698d51ecf51d\") " Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.697325 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-client-ca" (OuterVolumeSpecName: "client-ca") pod "f3d558e2-b766-4578-a013-698d51ecf51d" (UID: "f3d558e2-b766-4578-a013-698d51ecf51d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.697327 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-config" (OuterVolumeSpecName: "config") pod "f3d558e2-b766-4578-a013-698d51ecf51d" (UID: "f3d558e2-b766-4578-a013-698d51ecf51d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.714118 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d558e2-b766-4578-a013-698d51ecf51d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f3d558e2-b766-4578-a013-698d51ecf51d" (UID: "f3d558e2-b766-4578-a013-698d51ecf51d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.715120 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d558e2-b766-4578-a013-698d51ecf51d-kube-api-access-k4drp" (OuterVolumeSpecName: "kube-api-access-k4drp") pod "f3d558e2-b766-4578-a013-698d51ecf51d" (UID: "f3d558e2-b766-4578-a013-698d51ecf51d"). InnerVolumeSpecName "kube-api-access-k4drp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.796621 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-config\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.796688 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-client-ca\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.796711 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gnpk\" (UniqueName: \"kubernetes.io/projected/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-kube-api-access-5gnpk\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.796775 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-serving-cert\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.796865 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4drp\" (UniqueName: \"kubernetes.io/projected/f3d558e2-b766-4578-a013-698d51ecf51d-kube-api-access-k4drp\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.796876 4770 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.796885 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d558e2-b766-4578-a013-698d51ecf51d-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.796894 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d558e2-b766-4578-a013-698d51ecf51d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.898010 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-config\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.898054 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-client-ca\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.898078 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gnpk\" (UniqueName: \"kubernetes.io/projected/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-kube-api-access-5gnpk\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.898124 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-serving-cert\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.899922 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-client-ca\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.901575 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-serving-cert\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.901658 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-config\") pod 
\"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.915227 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gnpk\" (UniqueName: \"kubernetes.io/projected/cf8a9270-586d-4eaf-981c-34a7f23b7a1e-kube-api-access-5gnpk\") pod \"route-controller-manager-7c77b7bdff-sd4j5\" (UID: \"cf8a9270-586d-4eaf-981c-34a7f23b7a1e\") " pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:39 crc kubenswrapper[4770]: I1209 14:29:39.976132 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:40 crc kubenswrapper[4770]: I1209 14:29:40.330339 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5"] Dec 09 14:29:40 crc kubenswrapper[4770]: W1209 14:29:40.340273 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf8a9270_586d_4eaf_981c_34a7f23b7a1e.slice/crio-35b7b75a7f619af896820abe5909616aefa03df0b784d704d7defa7d78d5beae WatchSource:0}: Error finding container 35b7b75a7f619af896820abe5909616aefa03df0b784d704d7defa7d78d5beae: Status 404 returned error can't find the container with id 35b7b75a7f619af896820abe5909616aefa03df0b784d704d7defa7d78d5beae Dec 09 14:29:40 crc kubenswrapper[4770]: I1209 14:29:40.447706 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" event={"ID":"f3d558e2-b766-4578-a013-698d51ecf51d","Type":"ContainerDied","Data":"14ae577c51b4b8ec0137bac936e1549ef9c949ecd26800aa50c6d585786567f0"} Dec 09 14:29:40 crc kubenswrapper[4770]: I1209 14:29:40.447757 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z" Dec 09 14:29:40 crc kubenswrapper[4770]: I1209 14:29:40.448213 4770 scope.go:117] "RemoveContainer" containerID="8f68425ca70a66b13f120db00c3019dbc9648674ee54d99c5048344beea0f1bf" Dec 09 14:29:40 crc kubenswrapper[4770]: I1209 14:29:40.450971 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" event={"ID":"cf8a9270-586d-4eaf-981c-34a7f23b7a1e","Type":"ContainerStarted","Data":"35b7b75a7f619af896820abe5909616aefa03df0b784d704d7defa7d78d5beae"} Dec 09 14:29:40 crc kubenswrapper[4770]: I1209 14:29:40.477901 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z"] Dec 09 14:29:40 crc kubenswrapper[4770]: I1209 14:29:40.482989 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-6dx7z"] Dec 09 14:29:40 crc kubenswrapper[4770]: I1209 14:29:40.603067 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d558e2-b766-4578-a013-698d51ecf51d" path="/var/lib/kubelet/pods/f3d558e2-b766-4578-a013-698d51ecf51d/volumes" Dec 09 14:29:42 crc kubenswrapper[4770]: I1209 14:29:42.478621 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" event={"ID":"cf8a9270-586d-4eaf-981c-34a7f23b7a1e","Type":"ContainerStarted","Data":"e96c5ff617459e56f3534f65e0ba50c489ab5454a1f7cac3e45f77b0462432d3"} Dec 09 14:29:43 crc kubenswrapper[4770]: I1209 14:29:43.484054 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:43 crc kubenswrapper[4770]: I1209 14:29:43.489481 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" Dec 09 14:29:43 crc kubenswrapper[4770]: I1209 14:29:43.505345 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c77b7bdff-sd4j5" podStartSLOduration=5.505324933 podStartE2EDuration="5.505324933s" podCreationTimestamp="2025-12-09 14:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:29:43.49940694 +0000 UTC m=+415.395609096" watchObservedRunningTime="2025-12-09 14:29:43.505324933 +0000 UTC m=+415.401527069" Dec 09 14:29:44 crc kubenswrapper[4770]: I1209 14:29:44.243458 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:29:44 crc kubenswrapper[4770]: I1209 14:29:44.243901 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:29:44 crc kubenswrapper[4770]: I1209 14:29:44.243960 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:29:44 crc kubenswrapper[4770]: I1209 14:29:44.244647 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8d1eda564365c5c920f110d0bb1f391b787b6130b8ab2f01b19986d8be82924"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:29:44 crc kubenswrapper[4770]: I1209 14:29:44.244722 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://e8d1eda564365c5c920f110d0bb1f391b787b6130b8ab2f01b19986d8be82924" gracePeriod=600 Dec 09 14:29:45 crc kubenswrapper[4770]: I1209 14:29:45.494960 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="e8d1eda564365c5c920f110d0bb1f391b787b6130b8ab2f01b19986d8be82924" exitCode=0 Dec 09 14:29:45 crc kubenswrapper[4770]: I1209 14:29:45.495482 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"e8d1eda564365c5c920f110d0bb1f391b787b6130b8ab2f01b19986d8be82924"} Dec 09 14:29:45 crc kubenswrapper[4770]: I1209 14:29:45.495512 4770 scope.go:117] "RemoveContainer" containerID="343ebc69d1e7d875d64e8ee6ff5383f8e82f2512953764716d028a86e264026f" Dec 09 14:29:46 crc kubenswrapper[4770]: I1209 14:29:46.502584 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"c4143fcf6193bb8d37b6aa9f74630ea967df19039b5e904f79a07122fe7fe763"} Dec 09 14:29:51 crc kubenswrapper[4770]: I1209 14:29:51.839076 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-krgzt" Dec 09 14:29:51 crc kubenswrapper[4770]: I1209 14:29:51.900829 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ztlqj"] Dec 09 14:29:58 crc kubenswrapper[4770]: I1209 14:29:58.381924 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kfhcl"] Dec 09 14:29:58 crc kubenswrapper[4770]: I1209 14:29:58.382968 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kfhcl" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerName="registry-server" containerID="cri-o://87510a6667678ee2f5e94f9049d9e3341b066695fd62ba467a7c2059b27a67d2" gracePeriod=2 Dec 09 14:29:58 crc kubenswrapper[4770]: I1209 14:29:58.426243 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r"] Dec 09 14:29:58 crc kubenswrapper[4770]: I1209 14:29:58.426488 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" podUID="20d4ed64-7247-4656-b479-0b9c06c2fcc9" containerName="controller-manager" containerID="cri-o://4326034ba00ce30b1b60c33d2d0894fa92cf449df24190167e0205c29376019e" gracePeriod=30 Dec 09 14:29:58 crc 
kubenswrapper[4770]: I1209 14:29:58.576886 4770 generic.go:334] "Generic (PLEG): container finished" podID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerID="87510a6667678ee2f5e94f9049d9e3341b066695fd62ba467a7c2059b27a67d2" exitCode=0 Dec 09 14:29:58 crc kubenswrapper[4770]: I1209 14:29:58.576966 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfhcl" event={"ID":"d6f083cd-d57a-4162-a704-37cd9dd3be45","Type":"ContainerDied","Data":"87510a6667678ee2f5e94f9049d9e3341b066695fd62ba467a7c2059b27a67d2"} Dec 09 14:29:58 crc kubenswrapper[4770]: I1209 14:29:58.580346 4770 generic.go:334] "Generic (PLEG): container finished" podID="20d4ed64-7247-4656-b479-0b9c06c2fcc9" containerID="4326034ba00ce30b1b60c33d2d0894fa92cf449df24190167e0205c29376019e" exitCode=0 Dec 09 14:29:58 crc kubenswrapper[4770]: I1209 14:29:58.580396 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" event={"ID":"20d4ed64-7247-4656-b479-0b9c06c2fcc9","Type":"ContainerDied","Data":"4326034ba00ce30b1b60c33d2d0894fa92cf449df24190167e0205c29376019e"} Dec 09 14:29:58 crc kubenswrapper[4770]: I1209 14:29:58.843191 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:29:58 crc kubenswrapper[4770]: I1209 14:29:58.905611 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.028680 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-proxy-ca-bundles\") pod \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.028798 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d4ed64-7247-4656-b479-0b9c06c2fcc9-serving-cert\") pod \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.028825 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-client-ca\") pod \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.028855 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crffh\" (UniqueName: \"kubernetes.io/projected/d6f083cd-d57a-4162-a704-37cd9dd3be45-kube-api-access-crffh\") pod \"d6f083cd-d57a-4162-a704-37cd9dd3be45\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.028882 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp4p7\" (UniqueName: \"kubernetes.io/projected/20d4ed64-7247-4656-b479-0b9c06c2fcc9-kube-api-access-cp4p7\") pod \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.028913 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-catalog-content\") pod \"d6f083cd-d57a-4162-a704-37cd9dd3be45\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.028942 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-utilities\") pod \"d6f083cd-d57a-4162-a704-37cd9dd3be45\" (UID: \"d6f083cd-d57a-4162-a704-37cd9dd3be45\") " Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.028979 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-config\") pod \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\" (UID: \"20d4ed64-7247-4656-b479-0b9c06c2fcc9\") " Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.029513 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "20d4ed64-7247-4656-b479-0b9c06c2fcc9" (UID: "20d4ed64-7247-4656-b479-0b9c06c2fcc9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.029529 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "20d4ed64-7247-4656-b479-0b9c06c2fcc9" (UID: "20d4ed64-7247-4656-b479-0b9c06c2fcc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.029666 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-config" (OuterVolumeSpecName: "config") pod "20d4ed64-7247-4656-b479-0b9c06c2fcc9" (UID: "20d4ed64-7247-4656-b479-0b9c06c2fcc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.030553 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-utilities" (OuterVolumeSpecName: "utilities") pod "d6f083cd-d57a-4162-a704-37cd9dd3be45" (UID: "d6f083cd-d57a-4162-a704-37cd9dd3be45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.034511 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d4ed64-7247-4656-b479-0b9c06c2fcc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20d4ed64-7247-4656-b479-0b9c06c2fcc9" (UID: "20d4ed64-7247-4656-b479-0b9c06c2fcc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.034963 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f083cd-d57a-4162-a704-37cd9dd3be45-kube-api-access-crffh" (OuterVolumeSpecName: "kube-api-access-crffh") pod "d6f083cd-d57a-4162-a704-37cd9dd3be45" (UID: "d6f083cd-d57a-4162-a704-37cd9dd3be45"). InnerVolumeSpecName "kube-api-access-crffh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.036202 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d4ed64-7247-4656-b479-0b9c06c2fcc9-kube-api-access-cp4p7" (OuterVolumeSpecName: "kube-api-access-cp4p7") pod "20d4ed64-7247-4656-b479-0b9c06c2fcc9" (UID: "20d4ed64-7247-4656-b479-0b9c06c2fcc9"). InnerVolumeSpecName "kube-api-access-cp4p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.078249 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6f083cd-d57a-4162-a704-37cd9dd3be45" (UID: "d6f083cd-d57a-4162-a704-37cd9dd3be45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.130214 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.130252 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.130261 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.130272 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d4ed64-7247-4656-b479-0b9c06c2fcc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.130281 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20d4ed64-7247-4656-b479-0b9c06c2fcc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.130290 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crffh\" (UniqueName: \"kubernetes.io/projected/d6f083cd-d57a-4162-a704-37cd9dd3be45-kube-api-access-crffh\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.130299 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp4p7\" (UniqueName: \"kubernetes.io/projected/20d4ed64-7247-4656-b479-0b9c06c2fcc9-kube-api-access-cp4p7\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.130307 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6f083cd-d57a-4162-a704-37cd9dd3be45-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.585473 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" event={"ID":"20d4ed64-7247-4656-b479-0b9c06c2fcc9","Type":"ContainerDied","Data":"6afe144ca9e1b3b8af63fd49b9c685ba04873d7209345fd0fc454ec1c71e070f"} Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.585510 4770 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.585530 4770 scope.go:117] "RemoveContainer" containerID="4326034ba00ce30b1b60c33d2d0894fa92cf449df24190167e0205c29376019e" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.587593 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfhcl" event={"ID":"d6f083cd-d57a-4162-a704-37cd9dd3be45","Type":"ContainerDied","Data":"45a711551c0cd9fd076090450b4034a62d20c2dad82218d30dd37029e7b96f60"} Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.587662 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfhcl" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.606681 4770 scope.go:117] "RemoveContainer" containerID="87510a6667678ee2f5e94f9049d9e3341b066695fd62ba467a7c2059b27a67d2" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.620971 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r"] Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.636998 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-2vc9r"] Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.642988 4770 scope.go:117] "RemoveContainer" containerID="cf529e3d167f79fa87c1552bf1904a9dc7f5494d4d60022fd115cb42263f9d1e" Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.646674 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kfhcl"] Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.649576 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kfhcl"] Dec 09 14:29:59 crc kubenswrapper[4770]: I1209 14:29:59.659477 4770 scope.go:117] "RemoveContainer" containerID="f39ba7ab6e253302df4446391bd8ee8d632aa4ee7a943fe225c83cb19d20ed5c" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.172317 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf"] Dec 09 14:30:00 crc kubenswrapper[4770]: E1209 14:30:00.172916 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerName="extract-utilities" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.172932 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerName="extract-utilities" Dec 09 14:30:00 crc kubenswrapper[4770]: E1209 14:30:00.172949 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerName="registry-server" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.172955 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerName="registry-server" Dec 09 14:30:00 crc kubenswrapper[4770]: E1209 14:30:00.172964 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerName="extract-content" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.172970 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerName="extract-content" Dec 09 14:30:00 crc kubenswrapper[4770]: E1209 14:30:00.172980 4770 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="20d4ed64-7247-4656-b479-0b9c06c2fcc9" containerName="controller-manager" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.172986 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d4ed64-7247-4656-b479-0b9c06c2fcc9" containerName="controller-manager" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.173102 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d4ed64-7247-4656-b479-0b9c06c2fcc9" containerName="controller-manager" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.173120 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" containerName="registry-server" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.173654 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.178870 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.179291 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.179783 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf"] Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.346511 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmzz7\" (UniqueName: \"kubernetes.io/projected/d34ea339-53b7-4e3d-981b-120b80ad0385-kube-api-access-pmzz7\") pod \"collect-profiles-29421510-j7vnf\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.346592 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d34ea339-53b7-4e3d-981b-120b80ad0385-secret-volume\") pod \"collect-profiles-29421510-j7vnf\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.346787 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d34ea339-53b7-4e3d-981b-120b80ad0385-config-volume\") pod \"collect-profiles-29421510-j7vnf\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.448172 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d34ea339-53b7-4e3d-981b-120b80ad0385-config-volume\") pod \"collect-profiles-29421510-j7vnf\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.448249 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmzz7\" (UniqueName: \"kubernetes.io/projected/d34ea339-53b7-4e3d-981b-120b80ad0385-kube-api-access-pmzz7\") 
pod \"collect-profiles-29421510-j7vnf\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.448285 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d34ea339-53b7-4e3d-981b-120b80ad0385-secret-volume\") pod \"collect-profiles-29421510-j7vnf\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.449787 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d34ea339-53b7-4e3d-981b-120b80ad0385-config-volume\") pod \"collect-profiles-29421510-j7vnf\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.462603 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d34ea339-53b7-4e3d-981b-120b80ad0385-secret-volume\") pod \"collect-profiles-29421510-j7vnf\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.473764 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq"] Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.474570 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.479981 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.480407 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.480745 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.480952 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.481092 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.482824 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.485093 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.485254 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq"] Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.490452 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmzz7\" (UniqueName: \"kubernetes.io/projected/d34ea339-53b7-4e3d-981b-120b80ad0385-kube-api-access-pmzz7\") pod 
\"collect-profiles-29421510-j7vnf\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.504985 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.581799 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqdll"] Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.582291 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cqdll" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" containerName="registry-server" containerID="cri-o://fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337" gracePeriod=2 Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.598630 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d4ed64-7247-4656-b479-0b9c06c2fcc9" path="/var/lib/kubelet/pods/20d4ed64-7247-4656-b479-0b9c06c2fcc9/volumes" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.599390 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6f083cd-d57a-4162-a704-37cd9dd3be45" path="/var/lib/kubelet/pods/d6f083cd-d57a-4162-a704-37cd9dd3be45/volumes" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.651215 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-config\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.651316 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-proxy-ca-bundles\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.651351 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-client-ca\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.651403 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-serving-cert\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.651437 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42ts\" (UniqueName: \"kubernetes.io/projected/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-kube-api-access-n42ts\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " 
pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.767616 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-config\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.767693 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-proxy-ca-bundles\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.767760 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-client-ca\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.767827 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-serving-cert\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.767877 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42ts\" (UniqueName: \"kubernetes.io/projected/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-kube-api-access-n42ts\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.770512 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-client-ca\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.772693 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-proxy-ca-bundles\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.773607 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-config\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.784629 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-ks2lq"] Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.785014 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ks2lq" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerName="registry-server" containerID="cri-o://41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a" gracePeriod=2 Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.791566 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-serving-cert\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.800461 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n42ts\" (UniqueName: \"kubernetes.io/projected/bd7cb165-7b4e-4e8b-8e32-d7633aea449b-kube-api-access-n42ts\") pod \"controller-manager-5cbc7b4b57-cmtrq\" (UID: \"bd7cb165-7b4e-4e8b-8e32-d7633aea449b\") " pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.833141 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.955972 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf"] Dec 09 14:30:00 crc kubenswrapper[4770]: W1209 14:30:00.969026 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd34ea339_53b7_4e3d_981b_120b80ad0385.slice/crio-1636e4c20aa52ac102a07f0cd243b259bfb8b26850c5e1908861bc1ffe480130 WatchSource:0}: Error finding container 1636e4c20aa52ac102a07f0cd243b259bfb8b26850c5e1908861bc1ffe480130: Status 404 returned error can't find the container with id 1636e4c20aa52ac102a07f0cd243b259bfb8b26850c5e1908861bc1ffe480130 Dec 09 14:30:00 crc kubenswrapper[4770]: I1209 14:30:00.991166 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.072384 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-catalog-content\") pod \"6a25b549-8e55-47e1-ba51-781119aefc25\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.072669 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5phs\" (UniqueName: \"kubernetes.io/projected/6a25b549-8e55-47e1-ba51-781119aefc25-kube-api-access-f5phs\") pod \"6a25b549-8e55-47e1-ba51-781119aefc25\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.072803 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-utilities\") pod \"6a25b549-8e55-47e1-ba51-781119aefc25\" (UID: \"6a25b549-8e55-47e1-ba51-781119aefc25\") " Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.073827 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-utilities" (OuterVolumeSpecName: "utilities") pod "6a25b549-8e55-47e1-ba51-781119aefc25" (UID: "6a25b549-8e55-47e1-ba51-781119aefc25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.077226 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a25b549-8e55-47e1-ba51-781119aefc25-kube-api-access-f5phs" (OuterVolumeSpecName: "kube-api-access-f5phs") pod "6a25b549-8e55-47e1-ba51-781119aefc25" (UID: "6a25b549-8e55-47e1-ba51-781119aefc25"). InnerVolumeSpecName "kube-api-access-f5phs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.114642 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a25b549-8e55-47e1-ba51-781119aefc25" (UID: "6a25b549-8e55-47e1-ba51-781119aefc25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.174229 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.174271 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25b549-8e55-47e1-ba51-781119aefc25-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.174286 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5phs\" (UniqueName: \"kubernetes.io/projected/6a25b549-8e55-47e1-ba51-781119aefc25-kube-api-access-f5phs\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.176523 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.274991 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gwjl\" (UniqueName: \"kubernetes.io/projected/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-kube-api-access-6gwjl\") pod \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.275110 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-utilities\") pod \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.275184 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-catalog-content\") pod \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\" (UID: \"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824\") " Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.276082 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-utilities" (OuterVolumeSpecName: "utilities") pod "67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" (UID: "67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.279970 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-kube-api-access-6gwjl" (OuterVolumeSpecName: "kube-api-access-6gwjl") pod "67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" (UID: "67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824"). InnerVolumeSpecName "kube-api-access-6gwjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.300691 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq"] Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.376185 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.376218 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gwjl\" (UniqueName: \"kubernetes.io/projected/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-kube-api-access-6gwjl\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.408930 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" (UID: "67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.476992 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.609828 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" event={"ID":"bd7cb165-7b4e-4e8b-8e32-d7633aea449b","Type":"ContainerStarted","Data":"6d1413103962769c46b814505285944865ce27242fe0ef5eda3629767ac96873"} Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.609880 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" event={"ID":"bd7cb165-7b4e-4e8b-8e32-d7633aea449b","Type":"ContainerStarted","Data":"acfe7c43d92d9d4611c1da60aa3a6e6e3e6fbbb9f5676f0d1abd51964e812fbd"} Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.610034 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.616102 4770 generic.go:334] "Generic (PLEG): container finished" podID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerID="41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a" exitCode=0 Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.616198 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ks2lq" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.616164 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks2lq" event={"ID":"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824","Type":"ContainerDied","Data":"41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a"} Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.616255 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks2lq" event={"ID":"67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824","Type":"ContainerDied","Data":"128bc728fd44eeb4284852ee1baae2e45ed552aa3ec1dc6c02440032d3f360cf"} Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.616278 4770 scope.go:117] "RemoveContainer" containerID="41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.619662 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.622816 4770 generic.go:334] "Generic (PLEG): container finished" podID="6a25b549-8e55-47e1-ba51-781119aefc25" containerID="fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337" exitCode=0 Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.622865 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqdll" event={"ID":"6a25b549-8e55-47e1-ba51-781119aefc25","Type":"ContainerDied","Data":"fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337"} Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.622926 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqdll" 
event={"ID":"6a25b549-8e55-47e1-ba51-781119aefc25","Type":"ContainerDied","Data":"aea267640d9f3b70438e5404f36336b5b4306f3d27b629dcb54cc26f582e6f74"} Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.622882 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqdll" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.624676 4770 generic.go:334] "Generic (PLEG): container finished" podID="d34ea339-53b7-4e3d-981b-120b80ad0385" containerID="cca6704df730fc89b79f886353e1fb74048a40b644f16d273c7103f2e102d636" exitCode=0 Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.624721 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" event={"ID":"d34ea339-53b7-4e3d-981b-120b80ad0385","Type":"ContainerDied","Data":"cca6704df730fc89b79f886353e1fb74048a40b644f16d273c7103f2e102d636"} Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.624761 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" event={"ID":"d34ea339-53b7-4e3d-981b-120b80ad0385","Type":"ContainerStarted","Data":"1636e4c20aa52ac102a07f0cd243b259bfb8b26850c5e1908861bc1ffe480130"} Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.630283 4770 scope.go:117] "RemoveContainer" containerID="b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.642257 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cbc7b4b57-cmtrq" podStartSLOduration=3.642197976 podStartE2EDuration="3.642197976s" podCreationTimestamp="2025-12-09 14:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:30:01.635561614 +0000 UTC m=+433.531763760" watchObservedRunningTime="2025-12-09 14:30:01.642197976 +0000 UTC m=+433.538400122" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.677630 4770 scope.go:117] "RemoveContainer" containerID="fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.722975 4770 scope.go:117] "RemoveContainer" containerID="41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a" Dec 09 14:30:01 crc kubenswrapper[4770]: E1209 14:30:01.723376 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a\": container with ID starting with 41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a not found: ID does not exist" containerID="41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.723422 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a"} err="failed to get container status \"41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a\": rpc error: code = NotFound desc = could not find container \"41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a\": container with ID starting with 41b7bfae3c9fce6287d8138fce64ea5e49d6b1b9b5719d9b47d4817f1c6e720a not found: ID does not exist" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.723448 4770 
scope.go:117] "RemoveContainer" containerID="b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0"
Dec 09 14:30:01 crc kubenswrapper[4770]: E1209 14:30:01.723693 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0\": container with ID starting with b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0 not found: ID does not exist" containerID="b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0"
Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.723733 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0"} err="failed to get container status \"b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0\": rpc error: code = NotFound desc = could not find container \"b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0\": container with ID starting with b6618d6d2804899e4434197e25b1ee01150f97e2ae032e13dc9068ac25c0d3c0 not found: ID does not exist"
Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.723754 4770 scope.go:117] "RemoveContainer" containerID="fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7"
Dec 09 14:30:01 crc kubenswrapper[4770]: E1209 14:30:01.723946 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7\": container with ID starting with fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7 not found: ID does not exist" containerID="fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7"
Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.723969 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7"} err="failed to get container status \"fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7\": rpc error: code = NotFound desc = could not find container \"fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7\": container with ID starting with fec6c0fb951136b58820cc16d5a32f9823a357d3559e016bce67cc1572621cd7 not found: ID does not exist"
Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.723984 4770 scope.go:117] "RemoveContainer" containerID="fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337"
Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.731403 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ks2lq"]
Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.736912 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ks2lq"]
Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.744935 4770 scope.go:117] "RemoveContainer" containerID="4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8"
Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.752138 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqdll"]
Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.756072 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqdll"]
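The E1209 "ContainerStatus from runtime service failed" records here are the benign tail of container removal: the kubelet issues RemoveContainer for an ID that CRI-O has already deleted, the status lookup comes back NotFound, and pod_container_deletor logs the error and moves on. A rough filter for this pattern, a sketch only; the names REMOVE, NOTFOUND, and benign_notfound_ids are invented here, not kubelet APIs:

    import re
    import sys

    # A removal request, e.g.: scope.go:117] "RemoveContainer" containerID="b6618d6d28..."
    REMOVE = re.compile(r'"RemoveContainer" containerID="(?P<id>[0-9a-f]{64})"')
    # The follow-up failure, e.g.: could not find container \"b6618d6d28...\"
    NOTFOUND = re.compile(r'could not find container \\?"(?P<id>[0-9a-f]{64})\\?"')

    def benign_notfound_ids(lines):
        """Collect NotFound container IDs that were preceded by a RemoveContainer request."""
        requested = set()
        benign = set()
        for line in lines:
            m = REMOVE.search(line)
            if m:
                requested.add(m.group('id'))
            m = NOTFOUND.search(line)
            if m and m.group('id') in requested:
                benign.add(m.group('id'))
        return benign

    if __name__ == '__main__':
        for cid in sorted(benign_notfound_ids(sys.stdin)):
            print(cid)

Piped this journal slice, it should flag IDs such as b6618d6d28... and fec6c0fb95..., whose RemoveContainer requests at 14:30:01.630283 and 14:30:01.677630 precede their NotFound errors: deletions racing PLEG cleanup rather than runtime faults.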
containerID="046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.797064 4770 scope.go:117] "RemoveContainer" containerID="fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337" Dec 09 14:30:01 crc kubenswrapper[4770]: E1209 14:30:01.797451 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337\": container with ID starting with fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337 not found: ID does not exist" containerID="fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.797501 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337"} err="failed to get container status \"fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337\": rpc error: code = NotFound desc = could not find container \"fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337\": container with ID starting with fa65fd3a801e9c3dc058913c6b906a461b3a05eb8d2b9345e13e73ceae8e8337 not found: ID does not exist" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.797524 4770 scope.go:117] "RemoveContainer" containerID="4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8" Dec 09 14:30:01 crc kubenswrapper[4770]: E1209 14:30:01.797885 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8\": container with ID starting with 4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8 not found: ID does not exist" containerID="4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.797941 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8"} err="failed to get container status \"4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8\": rpc error: code = NotFound desc = could not find container \"4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8\": container with ID starting with 4577ec67fa74754531f937a3cbfb1b7468235700c60f401287a99b9dd349a7f8 not found: ID does not exist" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.797954 4770 scope.go:117] "RemoveContainer" containerID="046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466" Dec 09 14:30:01 crc kubenswrapper[4770]: E1209 14:30:01.798255 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466\": container with ID starting with 046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466 not found: ID does not exist" containerID="046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466" Dec 09 14:30:01 crc kubenswrapper[4770]: I1209 14:30:01.798274 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466"} err="failed to get container status \"046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466\": rpc error: code = 
NotFound desc = could not find container \"046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466\": container with ID starting with 046f80f5b0fad18a10c492a321f9805a18f21783cbf0706dca183757afea5466 not found: ID does not exist" Dec 09 14:30:02 crc kubenswrapper[4770]: I1209 14:30:02.596502 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" path="/var/lib/kubelet/pods/67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824/volumes" Dec 09 14:30:02 crc kubenswrapper[4770]: I1209 14:30:02.597524 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" path="/var/lib/kubelet/pods/6a25b549-8e55-47e1-ba51-781119aefc25/volumes" Dec 09 14:30:02 crc kubenswrapper[4770]: I1209 14:30:02.880691 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:02 crc kubenswrapper[4770]: I1209 14:30:02.994582 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d34ea339-53b7-4e3d-981b-120b80ad0385-secret-volume\") pod \"d34ea339-53b7-4e3d-981b-120b80ad0385\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " Dec 09 14:30:02 crc kubenswrapper[4770]: I1209 14:30:02.994673 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmzz7\" (UniqueName: \"kubernetes.io/projected/d34ea339-53b7-4e3d-981b-120b80ad0385-kube-api-access-pmzz7\") pod \"d34ea339-53b7-4e3d-981b-120b80ad0385\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " Dec 09 14:30:02 crc kubenswrapper[4770]: I1209 14:30:02.994708 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d34ea339-53b7-4e3d-981b-120b80ad0385-config-volume\") pod \"d34ea339-53b7-4e3d-981b-120b80ad0385\" (UID: \"d34ea339-53b7-4e3d-981b-120b80ad0385\") " Dec 09 14:30:02 crc kubenswrapper[4770]: I1209 14:30:02.996418 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d34ea339-53b7-4e3d-981b-120b80ad0385-config-volume" (OuterVolumeSpecName: "config-volume") pod "d34ea339-53b7-4e3d-981b-120b80ad0385" (UID: "d34ea339-53b7-4e3d-981b-120b80ad0385"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:30:03 crc kubenswrapper[4770]: I1209 14:30:03.000252 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d34ea339-53b7-4e3d-981b-120b80ad0385-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:03 crc kubenswrapper[4770]: I1209 14:30:03.004063 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34ea339-53b7-4e3d-981b-120b80ad0385-kube-api-access-pmzz7" (OuterVolumeSpecName: "kube-api-access-pmzz7") pod "d34ea339-53b7-4e3d-981b-120b80ad0385" (UID: "d34ea339-53b7-4e3d-981b-120b80ad0385"). InnerVolumeSpecName "kube-api-access-pmzz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:03 crc kubenswrapper[4770]: I1209 14:30:03.022426 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34ea339-53b7-4e3d-981b-120b80ad0385-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d34ea339-53b7-4e3d-981b-120b80ad0385" (UID: "d34ea339-53b7-4e3d-981b-120b80ad0385"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:30:03 crc kubenswrapper[4770]: I1209 14:30:03.101486 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d34ea339-53b7-4e3d-981b-120b80ad0385-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:03 crc kubenswrapper[4770]: I1209 14:30:03.101532 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmzz7\" (UniqueName: \"kubernetes.io/projected/d34ea339-53b7-4e3d-981b-120b80ad0385-kube-api-access-pmzz7\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:03 crc kubenswrapper[4770]: I1209 14:30:03.640976 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" Dec 09 14:30:03 crc kubenswrapper[4770]: I1209 14:30:03.641157 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf" event={"ID":"d34ea339-53b7-4e3d-981b-120b80ad0385","Type":"ContainerDied","Data":"1636e4c20aa52ac102a07f0cd243b259bfb8b26850c5e1908861bc1ffe480130"} Dec 09 14:30:03 crc kubenswrapper[4770]: I1209 14:30:03.643995 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1636e4c20aa52ac102a07f0cd243b259bfb8b26850c5e1908861bc1ffe480130" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.475859 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nhncv"] Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.476752 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nhncv" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerName="registry-server" containerID="cri-o://64a10799aab463aa00c4fc0fae7416dda1e30936eb4fb1ac7ee09147a9015fa4" gracePeriod=30 Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.484554 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8v5jt"] Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.484861 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8v5jt" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerName="registry-server" containerID="cri-o://a4e876ded02e4e2921f6198c0891662902a5539dba3649757e6adad24452d0ac" gracePeriod=30 Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.494799 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-th5m2"] Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.495074 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" containerID="cri-o://7028db6b16120d1223d5ef7aa311cf29109299d7d2d87fbad146e77b7a1cc3a3" gracePeriod=30 Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.498775 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqg9c"] Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.499047 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jqg9c" podUID="65767399-1491-44ab-8df8-ce71adea95c3" containerName="registry-server" 
containerID="cri-o://62ce893bb29b4f0ac34e4ae97792edb449650c17097baf057c1f7c8ce38bbd26" gracePeriod=30 Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.508005 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97p4f"] Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.508306 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-97p4f" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerName="registry-server" containerID="cri-o://e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856" gracePeriod=30 Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.518609 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t2q8w"] Dec 09 14:30:05 crc kubenswrapper[4770]: E1209 14:30:05.518896 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" containerName="extract-content" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.518915 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" containerName="extract-content" Dec 09 14:30:05 crc kubenswrapper[4770]: E1209 14:30:05.518930 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" containerName="registry-server" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.518937 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" containerName="registry-server" Dec 09 14:30:05 crc kubenswrapper[4770]: E1209 14:30:05.518952 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" containerName="extract-utilities" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.518960 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" containerName="extract-utilities" Dec 09 14:30:05 crc kubenswrapper[4770]: E1209 14:30:05.518972 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerName="extract-utilities" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.518979 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerName="extract-utilities" Dec 09 14:30:05 crc kubenswrapper[4770]: E1209 14:30:05.518998 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34ea339-53b7-4e3d-981b-120b80ad0385" containerName="collect-profiles" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.519005 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34ea339-53b7-4e3d-981b-120b80ad0385" containerName="collect-profiles" Dec 09 14:30:05 crc kubenswrapper[4770]: E1209 14:30:05.519012 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerName="registry-server" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.519017 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerName="registry-server" Dec 09 14:30:05 crc kubenswrapper[4770]: E1209 14:30:05.519027 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerName="extract-content" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.519032 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerName="extract-content" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.519142 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34ea339-53b7-4e3d-981b-120b80ad0385" containerName="collect-profiles" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.519155 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e24b2d-ecd7-4d6b-9dac-cf6ddcdf0824" containerName="registry-server" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.519167 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a25b549-8e55-47e1-ba51-781119aefc25" containerName="registry-server" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.519539 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.535774 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t2q8w"] Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.637291 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p28tc\" (UniqueName: \"kubernetes.io/projected/639ac3bd-8610-4f95-98f8-ad53a5c0d1fd-kube-api-access-p28tc\") pod \"marketplace-operator-79b997595-t2q8w\" (UID: \"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.637517 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/639ac3bd-8610-4f95-98f8-ad53a5c0d1fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t2q8w\" (UID: \"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.637584 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/639ac3bd-8610-4f95-98f8-ad53a5c0d1fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t2q8w\" (UID: \"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.659260 4770 generic.go:334] "Generic (PLEG): container finished" podID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerID="a4e876ded02e4e2921f6198c0891662902a5539dba3649757e6adad24452d0ac" exitCode=0 Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.659318 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v5jt" event={"ID":"869de9e6-0d73-42f6-bf6a-49cc26a84531","Type":"ContainerDied","Data":"a4e876ded02e4e2921f6198c0891662902a5539dba3649757e6adad24452d0ac"} Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.662197 4770 generic.go:334] "Generic (PLEG): container finished" podID="65767399-1491-44ab-8df8-ce71adea95c3" containerID="62ce893bb29b4f0ac34e4ae97792edb449650c17097baf057c1f7c8ce38bbd26" exitCode=0 Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.662267 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqg9c" 
event={"ID":"65767399-1491-44ab-8df8-ce71adea95c3","Type":"ContainerDied","Data":"62ce893bb29b4f0ac34e4ae97792edb449650c17097baf057c1f7c8ce38bbd26"} Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.664180 4770 generic.go:334] "Generic (PLEG): container finished" podID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerID="64a10799aab463aa00c4fc0fae7416dda1e30936eb4fb1ac7ee09147a9015fa4" exitCode=0 Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.664227 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nhncv" event={"ID":"58ab865b-2f32-439d-8e32-db4f8b4a6e2b","Type":"ContainerDied","Data":"64a10799aab463aa00c4fc0fae7416dda1e30936eb4fb1ac7ee09147a9015fa4"} Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.667102 4770 generic.go:334] "Generic (PLEG): container finished" podID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerID="7028db6b16120d1223d5ef7aa311cf29109299d7d2d87fbad146e77b7a1cc3a3" exitCode=0 Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.667135 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" event={"ID":"08d594b0-871f-4f3f-9d64-f14f0773be76","Type":"ContainerDied","Data":"7028db6b16120d1223d5ef7aa311cf29109299d7d2d87fbad146e77b7a1cc3a3"} Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.667162 4770 scope.go:117] "RemoveContainer" containerID="61dd46d8dc8e4106225f5dbf2568106c36247fc2e46766866246d4f184ae031e" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.739425 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p28tc\" (UniqueName: \"kubernetes.io/projected/639ac3bd-8610-4f95-98f8-ad53a5c0d1fd-kube-api-access-p28tc\") pod \"marketplace-operator-79b997595-t2q8w\" (UID: \"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.739508 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/639ac3bd-8610-4f95-98f8-ad53a5c0d1fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t2q8w\" (UID: \"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.739543 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/639ac3bd-8610-4f95-98f8-ad53a5c0d1fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t2q8w\" (UID: \"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.740913 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/639ac3bd-8610-4f95-98f8-ad53a5c0d1fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t2q8w\" (UID: \"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.758335 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/639ac3bd-8610-4f95-98f8-ad53a5c0d1fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t2q8w\" (UID: 
\"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.760820 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p28tc\" (UniqueName: \"kubernetes.io/projected/639ac3bd-8610-4f95-98f8-ad53a5c0d1fd-kube-api-access-p28tc\") pod \"marketplace-operator-79b997595-t2q8w\" (UID: \"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.847248 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:05 crc kubenswrapper[4770]: I1209 14:30:05.929600 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.056759 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-catalog-content\") pod \"869de9e6-0d73-42f6-bf6a-49cc26a84531\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.057105 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mk9t\" (UniqueName: \"kubernetes.io/projected/869de9e6-0d73-42f6-bf6a-49cc26a84531-kube-api-access-5mk9t\") pod \"869de9e6-0d73-42f6-bf6a-49cc26a84531\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.057136 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-utilities\") pod \"869de9e6-0d73-42f6-bf6a-49cc26a84531\" (UID: \"869de9e6-0d73-42f6-bf6a-49cc26a84531\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.058126 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-utilities" (OuterVolumeSpecName: "utilities") pod "869de9e6-0d73-42f6-bf6a-49cc26a84531" (UID: "869de9e6-0d73-42f6-bf6a-49cc26a84531"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.066857 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869de9e6-0d73-42f6-bf6a-49cc26a84531-kube-api-access-5mk9t" (OuterVolumeSpecName: "kube-api-access-5mk9t") pod "869de9e6-0d73-42f6-bf6a-49cc26a84531" (UID: "869de9e6-0d73-42f6-bf6a-49cc26a84531"). InnerVolumeSpecName "kube-api-access-5mk9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.087206 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.120657 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.139679 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "869de9e6-0d73-42f6-bf6a-49cc26a84531" (UID: "869de9e6-0d73-42f6-bf6a-49cc26a84531"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.153480 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.158707 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-trusted-ca\") pod \"08d594b0-871f-4f3f-9d64-f14f0773be76\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.158762 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-utilities\") pod \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.158800 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-catalog-content\") pod \"65767399-1491-44ab-8df8-ce71adea95c3\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.158833 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n989l\" (UniqueName: \"kubernetes.io/projected/65767399-1491-44ab-8df8-ce71adea95c3-kube-api-access-n989l\") pod \"65767399-1491-44ab-8df8-ce71adea95c3\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.158906 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-operator-metrics\") pod \"08d594b0-871f-4f3f-9d64-f14f0773be76\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.158960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-utilities\") pod \"65767399-1491-44ab-8df8-ce71adea95c3\" (UID: \"65767399-1491-44ab-8df8-ce71adea95c3\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.158997 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-catalog-content\") pod \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.159020 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj9ss\" (UniqueName: \"kubernetes.io/projected/08d594b0-871f-4f3f-9d64-f14f0773be76-kube-api-access-rj9ss\") pod 
\"08d594b0-871f-4f3f-9d64-f14f0773be76\" (UID: \"08d594b0-871f-4f3f-9d64-f14f0773be76\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.159062 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn5v4\" (UniqueName: \"kubernetes.io/projected/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-kube-api-access-wn5v4\") pod \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\" (UID: \"58ab865b-2f32-439d-8e32-db4f8b4a6e2b\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.160220 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mk9t\" (UniqueName: \"kubernetes.io/projected/869de9e6-0d73-42f6-bf6a-49cc26a84531-kube-api-access-5mk9t\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.160246 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.160258 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869de9e6-0d73-42f6-bf6a-49cc26a84531-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.161101 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-utilities" (OuterVolumeSpecName: "utilities") pod "65767399-1491-44ab-8df8-ce71adea95c3" (UID: "65767399-1491-44ab-8df8-ce71adea95c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.162466 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "08d594b0-871f-4f3f-9d64-f14f0773be76" (UID: "08d594b0-871f-4f3f-9d64-f14f0773be76"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.165903 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-kube-api-access-wn5v4" (OuterVolumeSpecName: "kube-api-access-wn5v4") pod "58ab865b-2f32-439d-8e32-db4f8b4a6e2b" (UID: "58ab865b-2f32-439d-8e32-db4f8b4a6e2b"). InnerVolumeSpecName "kube-api-access-wn5v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.166459 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "08d594b0-871f-4f3f-9d64-f14f0773be76" (UID: "08d594b0-871f-4f3f-9d64-f14f0773be76"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.166466 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d594b0-871f-4f3f-9d64-f14f0773be76-kube-api-access-rj9ss" (OuterVolumeSpecName: "kube-api-access-rj9ss") pod "08d594b0-871f-4f3f-9d64-f14f0773be76" (UID: "08d594b0-871f-4f3f-9d64-f14f0773be76"). InnerVolumeSpecName "kube-api-access-rj9ss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.169975 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-utilities" (OuterVolumeSpecName: "utilities") pod "58ab865b-2f32-439d-8e32-db4f8b4a6e2b" (UID: "58ab865b-2f32-439d-8e32-db4f8b4a6e2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.183822 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65767399-1491-44ab-8df8-ce71adea95c3-kube-api-access-n989l" (OuterVolumeSpecName: "kube-api-access-n989l") pod "65767399-1491-44ab-8df8-ce71adea95c3" (UID: "65767399-1491-44ab-8df8-ce71adea95c3"). InnerVolumeSpecName "kube-api-access-n989l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.188312 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65767399-1491-44ab-8df8-ce71adea95c3" (UID: "65767399-1491-44ab-8df8-ce71adea95c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.226185 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58ab865b-2f32-439d-8e32-db4f8b4a6e2b" (UID: "58ab865b-2f32-439d-8e32-db4f8b4a6e2b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.261778 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.261813 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.261826 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.261838 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n989l\" (UniqueName: \"kubernetes.io/projected/65767399-1491-44ab-8df8-ce71adea95c3-kube-api-access-n989l\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.261851 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08d594b0-871f-4f3f-9d64-f14f0773be76-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.261862 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65767399-1491-44ab-8df8-ce71adea95c3-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.261872 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.261915 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj9ss\" (UniqueName: \"kubernetes.io/projected/08d594b0-871f-4f3f-9d64-f14f0773be76-kube-api-access-rj9ss\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.261928 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn5v4\" (UniqueName: \"kubernetes.io/projected/58ab865b-2f32-439d-8e32-db4f8b4a6e2b-kube-api-access-wn5v4\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.400387 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t2q8w"] Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.508623 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.586811 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-catalog-content\") pod \"b4d654c7-6c1a-49dc-86b6-d756afafe480\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.599634 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-utilities\") pod \"b4d654c7-6c1a-49dc-86b6-d756afafe480\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.599669 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8jbf\" (UniqueName: \"kubernetes.io/projected/b4d654c7-6c1a-49dc-86b6-d756afafe480-kube-api-access-v8jbf\") pod \"b4d654c7-6c1a-49dc-86b6-d756afafe480\" (UID: \"b4d654c7-6c1a-49dc-86b6-d756afafe480\") " Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.602618 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-utilities" (OuterVolumeSpecName: "utilities") pod "b4d654c7-6c1a-49dc-86b6-d756afafe480" (UID: "b4d654c7-6c1a-49dc-86b6-d756afafe480"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.606953 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d654c7-6c1a-49dc-86b6-d756afafe480-kube-api-access-v8jbf" (OuterVolumeSpecName: "kube-api-access-v8jbf") pod "b4d654c7-6c1a-49dc-86b6-d756afafe480" (UID: "b4d654c7-6c1a-49dc-86b6-d756afafe480"). InnerVolumeSpecName "kube-api-access-v8jbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.673485 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" event={"ID":"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd","Type":"ContainerStarted","Data":"1ae83d02880663c7c2d519673b72dc471719ea1fdeb5adc0e4a4860f8b9c042b"} Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.673549 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" event={"ID":"639ac3bd-8610-4f95-98f8-ad53a5c0d1fd","Type":"ContainerStarted","Data":"6f2d2a7aca1dacbacc213580194881602af5cb503f8816787c5d34be5a0fa6b2"} Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.673937 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.674913 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-t2q8w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.674953 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" podUID="639ac3bd-8610-4f95-98f8-ad53a5c0d1fd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.685069 4770 generic.go:334] "Generic (PLEG): container finished" podID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerID="e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856" exitCode=0 Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.685166 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97p4f" event={"ID":"b4d654c7-6c1a-49dc-86b6-d756afafe480","Type":"ContainerDied","Data":"e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856"} Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.685197 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97p4f" event={"ID":"b4d654c7-6c1a-49dc-86b6-d756afafe480","Type":"ContainerDied","Data":"13f73e016bb5f2e79f759922ba1f3c3a99400119916a4c08bc3581228bbb85b6"} Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.685217 4770 scope.go:117] "RemoveContainer" containerID="e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.686499 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-97p4f" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.690655 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" podStartSLOduration=1.6906414 podStartE2EDuration="1.6906414s" podCreationTimestamp="2025-12-09 14:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:30:06.688445269 +0000 UTC m=+438.584647405" watchObservedRunningTime="2025-12-09 14:30:06.6906414 +0000 UTC m=+438.586843546" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.695066 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqg9c" event={"ID":"65767399-1491-44ab-8df8-ce71adea95c3","Type":"ContainerDied","Data":"8affefca770dea0fa5c24ddd7801b9b6c94fe29538b20a79ebf538bdd3cd35c7"} Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.695101 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqg9c" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.698270 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nhncv" event={"ID":"58ab865b-2f32-439d-8e32-db4f8b4a6e2b","Type":"ContainerDied","Data":"31f2e40aebb073e161a37d140db14f3a90d1d4c935ee0943f4adc76a60e3f72a"} Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.698529 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nhncv" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.701052 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.701086 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8jbf\" (UniqueName: \"kubernetes.io/projected/b4d654c7-6c1a-49dc-86b6-d756afafe480-kube-api-access-v8jbf\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.705504 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" event={"ID":"08d594b0-871f-4f3f-9d64-f14f0773be76","Type":"ContainerDied","Data":"561c046d32b3fe6469715b5c3c106baa21398dbcc401e9b885cf543e9f71b9c0"} Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.705609 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-th5m2" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.706564 4770 scope.go:117] "RemoveContainer" containerID="8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.710537 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v5jt" event={"ID":"869de9e6-0d73-42f6-bf6a-49cc26a84531","Type":"ContainerDied","Data":"8ad845283511bed07f96a688f028d60d9d1a42863e5e350f9e13df4868307c94"} Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.710677 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8v5jt" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.723558 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4d654c7-6c1a-49dc-86b6-d756afafe480" (UID: "b4d654c7-6c1a-49dc-86b6-d756afafe480"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.737008 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nhncv"] Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.740964 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nhncv"] Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.745345 4770 scope.go:117] "RemoveContainer" containerID="04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.746420 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqg9c"] Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.753857 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqg9c"] Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.766781 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-th5m2"] Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.767059 4770 scope.go:117] "RemoveContainer" containerID="e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856" Dec 09 14:30:06 crc kubenswrapper[4770]: E1209 14:30:06.767650 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856\": container with ID starting with e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856 not found: ID does not exist" containerID="e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.767694 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856"} err="failed to get container status \"e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856\": rpc error: code = NotFound desc = could not find container \"e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856\": container with ID starting with e415c6b8036c411d2c9a84cf0d7fc95b583fd9a05623fa0b65230061ceaac856 not found: ID does not exist" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.767717 4770 scope.go:117] "RemoveContainer" containerID="8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f" Dec 09 14:30:06 crc kubenswrapper[4770]: E1209 14:30:06.768230 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f\": container with ID starting with 8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f not found: ID does not exist" containerID="8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.768256 4770 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f"} err="failed to get container status \"8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f\": rpc error: code = NotFound desc = could not find container \"8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f\": container with ID starting with 8f956ba28e57756bead084168a355009bae8d556268015abb31056136849bc6f not found: ID does not exist" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.768281 4770 scope.go:117] "RemoveContainer" containerID="04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd" Dec 09 14:30:06 crc kubenswrapper[4770]: E1209 14:30:06.769527 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd\": container with ID starting with 04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd not found: ID does not exist" containerID="04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.769556 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd"} err="failed to get container status \"04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd\": rpc error: code = NotFound desc = could not find container \"04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd\": container with ID starting with 04d9fb50e20d1e62d16658fc51c958387e1ac0aa1a6c44d3d2df612ca14146cd not found: ID does not exist" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.769573 4770 scope.go:117] "RemoveContainer" containerID="62ce893bb29b4f0ac34e4ae97792edb449650c17097baf057c1f7c8ce38bbd26" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.772806 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-th5m2"] Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.776064 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8v5jt"] Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.779090 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8v5jt"] Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.781828 4770 scope.go:117] "RemoveContainer" containerID="c55a7efaa67cdd00f1e3ebc0c22d3a6c9ed90387d12b5f5b76c5513c09ea6a2b" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.794943 4770 scope.go:117] "RemoveContainer" containerID="cda69458c4e03cdc464d21e3b4210c10a756556fbc4d341b17b223c5e86c7403" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.803029 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d654c7-6c1a-49dc-86b6-d756afafe480-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.814407 4770 scope.go:117] "RemoveContainer" containerID="64a10799aab463aa00c4fc0fae7416dda1e30936eb4fb1ac7ee09147a9015fa4" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.835812 4770 scope.go:117] "RemoveContainer" containerID="c504ef2861bf3cdef074f443a16692110d2d4723100141852c4c5506753b4184" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.848452 4770 scope.go:117] "RemoveContainer" 
containerID="2ad23a44bded63e3ac6ec8b90dc363b6e694dc42f7e875a53aa9063ff0a37ce6" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.864978 4770 scope.go:117] "RemoveContainer" containerID="7028db6b16120d1223d5ef7aa311cf29109299d7d2d87fbad146e77b7a1cc3a3" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.881633 4770 scope.go:117] "RemoveContainer" containerID="a4e876ded02e4e2921f6198c0891662902a5539dba3649757e6adad24452d0ac" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.913468 4770 scope.go:117] "RemoveContainer" containerID="d3ce2bf3cc8bf701ea249d77b862786c732369dc5d73f139f6f7f8bbf8c19b70" Dec 09 14:30:06 crc kubenswrapper[4770]: I1209 14:30:06.931017 4770 scope.go:117] "RemoveContainer" containerID="69ccdcb76bed81b841654566c692e460563751544c96fe51dd26d406730614b6" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.010537 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97p4f"] Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.016001 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-97p4f"] Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.193661 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7m4fp"] Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.193900 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerName="extract-content" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.193913 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerName="extract-content" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.193926 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.193932 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.193941 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.193947 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.193955 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65767399-1491-44ab-8df8-ce71adea95c3" containerName="extract-content" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.193960 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="65767399-1491-44ab-8df8-ce71adea95c3" containerName="extract-content" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.193968 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerName="extract-content" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.193974 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerName="extract-content" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.193982 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerName="extract-utilities" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 
14:30:07.193988 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerName="extract-utilities" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.193995 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerName="extract-content" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194000 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerName="extract-content" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.194009 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerName="extract-utilities" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194015 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerName="extract-utilities" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.194024 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65767399-1491-44ab-8df8-ce71adea95c3" containerName="extract-utilities" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194030 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="65767399-1491-44ab-8df8-ce71adea95c3" containerName="extract-utilities" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.194038 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194044 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.194055 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerName="extract-utilities" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194060 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerName="extract-utilities" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.194067 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194073 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.194081 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65767399-1491-44ab-8df8-ce71adea95c3" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194086 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="65767399-1491-44ab-8df8-ce71adea95c3" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194163 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194175 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194186 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" 
containerName="marketplace-operator" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194193 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194199 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="65767399-1491-44ab-8df8-ce71adea95c3" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194209 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" containerName="registry-server" Dec 09 14:30:07 crc kubenswrapper[4770]: E1209 14:30:07.194287 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194294 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" containerName="marketplace-operator" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.194936 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.198824 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.208007 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7m4fp"] Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.309349 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrhkv\" (UniqueName: \"kubernetes.io/projected/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-kube-api-access-hrhkv\") pod \"certified-operators-7m4fp\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.309454 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-utilities\") pod \"certified-operators-7m4fp\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.309514 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-catalog-content\") pod \"certified-operators-7m4fp\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.410243 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrhkv\" (UniqueName: \"kubernetes.io/projected/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-kube-api-access-hrhkv\") pod \"certified-operators-7m4fp\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.410316 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-utilities\") pod \"certified-operators-7m4fp\" (UID: 
\"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.410361 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-catalog-content\") pod \"certified-operators-7m4fp\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.410920 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-catalog-content\") pod \"certified-operators-7m4fp\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.411019 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-utilities\") pod \"certified-operators-7m4fp\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.431708 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrhkv\" (UniqueName: \"kubernetes.io/projected/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-kube-api-access-hrhkv\") pod \"certified-operators-7m4fp\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.510578 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.730560 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" Dec 09 14:30:07 crc kubenswrapper[4770]: I1209 14:30:07.947329 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7m4fp"] Dec 09 14:30:07 crc kubenswrapper[4770]: W1209 14:30:07.952197 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37d57bff_0a2d_4dc2_a9d3_a678f6030fe3.slice/crio-5b279de969b9ae3db827def8804d72534ff37de5608a30e7639efca123d038ca WatchSource:0}: Error finding container 5b279de969b9ae3db827def8804d72534ff37de5608a30e7639efca123d038ca: Status 404 returned error can't find the container with id 5b279de969b9ae3db827def8804d72534ff37de5608a30e7639efca123d038ca Dec 09 14:30:08 crc kubenswrapper[4770]: I1209 14:30:08.594099 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d594b0-871f-4f3f-9d64-f14f0773be76" path="/var/lib/kubelet/pods/08d594b0-871f-4f3f-9d64-f14f0773be76/volumes" Dec 09 14:30:08 crc kubenswrapper[4770]: I1209 14:30:08.595228 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ab865b-2f32-439d-8e32-db4f8b4a6e2b" path="/var/lib/kubelet/pods/58ab865b-2f32-439d-8e32-db4f8b4a6e2b/volumes" Dec 09 14:30:08 crc kubenswrapper[4770]: I1209 14:30:08.596016 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65767399-1491-44ab-8df8-ce71adea95c3" path="/var/lib/kubelet/pods/65767399-1491-44ab-8df8-ce71adea95c3/volumes" Dec 09 14:30:08 crc 
kubenswrapper[4770]: I1209 14:30:08.597266 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869de9e6-0d73-42f6-bf6a-49cc26a84531" path="/var/lib/kubelet/pods/869de9e6-0d73-42f6-bf6a-49cc26a84531/volumes" Dec 09 14:30:08 crc kubenswrapper[4770]: I1209 14:30:08.597996 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d654c7-6c1a-49dc-86b6-d756afafe480" path="/var/lib/kubelet/pods/b4d654c7-6c1a-49dc-86b6-d756afafe480/volumes" Dec 09 14:30:08 crc kubenswrapper[4770]: I1209 14:30:08.730922 4770 generic.go:334] "Generic (PLEG): container finished" podID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerID="8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0" exitCode=0 Dec 09 14:30:08 crc kubenswrapper[4770]: I1209 14:30:08.731022 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m4fp" event={"ID":"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3","Type":"ContainerDied","Data":"8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0"} Dec 09 14:30:08 crc kubenswrapper[4770]: I1209 14:30:08.731065 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m4fp" event={"ID":"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3","Type":"ContainerStarted","Data":"5b279de969b9ae3db827def8804d72534ff37de5608a30e7639efca123d038ca"} Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.001320 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r7scr"] Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.002820 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.008074 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7scr"] Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.008125 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.137425 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56w7q\" (UniqueName: \"kubernetes.io/projected/133b114f-fd6f-429b-8d37-4ac8e0a48730-kube-api-access-56w7q\") pod \"redhat-marketplace-r7scr\" (UID: \"133b114f-fd6f-429b-8d37-4ac8e0a48730\") " pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.137520 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133b114f-fd6f-429b-8d37-4ac8e0a48730-utilities\") pod \"redhat-marketplace-r7scr\" (UID: \"133b114f-fd6f-429b-8d37-4ac8e0a48730\") " pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.137612 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133b114f-fd6f-429b-8d37-4ac8e0a48730-catalog-content\") pod \"redhat-marketplace-r7scr\" (UID: \"133b114f-fd6f-429b-8d37-4ac8e0a48730\") " pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.238812 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/133b114f-fd6f-429b-8d37-4ac8e0a48730-catalog-content\") pod \"redhat-marketplace-r7scr\" (UID: \"133b114f-fd6f-429b-8d37-4ac8e0a48730\") " pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.239302 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133b114f-fd6f-429b-8d37-4ac8e0a48730-catalog-content\") pod \"redhat-marketplace-r7scr\" (UID: \"133b114f-fd6f-429b-8d37-4ac8e0a48730\") " pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.239797 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56w7q\" (UniqueName: \"kubernetes.io/projected/133b114f-fd6f-429b-8d37-4ac8e0a48730-kube-api-access-56w7q\") pod \"redhat-marketplace-r7scr\" (UID: \"133b114f-fd6f-429b-8d37-4ac8e0a48730\") " pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.239840 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133b114f-fd6f-429b-8d37-4ac8e0a48730-utilities\") pod \"redhat-marketplace-r7scr\" (UID: \"133b114f-fd6f-429b-8d37-4ac8e0a48730\") " pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.240112 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133b114f-fd6f-429b-8d37-4ac8e0a48730-utilities\") pod \"redhat-marketplace-r7scr\" (UID: \"133b114f-fd6f-429b-8d37-4ac8e0a48730\") " pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.276438 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56w7q\" (UniqueName: \"kubernetes.io/projected/133b114f-fd6f-429b-8d37-4ac8e0a48730-kube-api-access-56w7q\") pod \"redhat-marketplace-r7scr\" (UID: \"133b114f-fd6f-429b-8d37-4ac8e0a48730\") " pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.323355 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.590793 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-48dxs"] Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.593206 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.596020 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.603835 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48dxs"] Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.646373 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-utilities\") pod \"redhat-operators-48dxs\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.646436 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-catalog-content\") pod \"redhat-operators-48dxs\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.646491 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sllmh\" (UniqueName: \"kubernetes.io/projected/410c8f09-8472-4bb2-a017-24a1b2e9d6af-kube-api-access-sllmh\") pod \"redhat-operators-48dxs\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.737942 4770 generic.go:334] "Generic (PLEG): container finished" podID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerID="1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd" exitCode=0 Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.738035 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m4fp" event={"ID":"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3","Type":"ContainerDied","Data":"1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd"} Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.747398 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-utilities\") pod \"redhat-operators-48dxs\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.747459 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-catalog-content\") pod \"redhat-operators-48dxs\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.747528 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sllmh\" (UniqueName: \"kubernetes.io/projected/410c8f09-8472-4bb2-a017-24a1b2e9d6af-kube-api-access-sllmh\") pod \"redhat-operators-48dxs\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.748256 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-utilities\") pod \"redhat-operators-48dxs\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.748285 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-catalog-content\") pod \"redhat-operators-48dxs\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.757356 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7scr"] Dec 09 14:30:09 crc kubenswrapper[4770]: W1209 14:30:09.770424 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod133b114f_fd6f_429b_8d37_4ac8e0a48730.slice/crio-27a77c7250a43293f85ff18b70b47b1609592d14b1524facdddc41e490f6d95a WatchSource:0}: Error finding container 27a77c7250a43293f85ff18b70b47b1609592d14b1524facdddc41e490f6d95a: Status 404 returned error can't find the container with id 27a77c7250a43293f85ff18b70b47b1609592d14b1524facdddc41e490f6d95a Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.783477 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sllmh\" (UniqueName: \"kubernetes.io/projected/410c8f09-8472-4bb2-a017-24a1b2e9d6af-kube-api-access-sllmh\") pod \"redhat-operators-48dxs\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:09 crc kubenswrapper[4770]: I1209 14:30:09.952698 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:10 crc kubenswrapper[4770]: I1209 14:30:10.351010 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48dxs"] Dec 09 14:30:10 crc kubenswrapper[4770]: W1209 14:30:10.360019 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod410c8f09_8472_4bb2_a017_24a1b2e9d6af.slice/crio-9c78fdcea2eab0b1098342a49af8a7fecf09632a7483ed0b5ed1687621e6d1aa WatchSource:0}: Error finding container 9c78fdcea2eab0b1098342a49af8a7fecf09632a7483ed0b5ed1687621e6d1aa: Status 404 returned error can't find the container with id 9c78fdcea2eab0b1098342a49af8a7fecf09632a7483ed0b5ed1687621e6d1aa Dec 09 14:30:10 crc kubenswrapper[4770]: I1209 14:30:10.744416 4770 generic.go:334] "Generic (PLEG): container finished" podID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerID="d6e00117e78a4c74a0e2452705af6de840a8915564cc001247a0aae3cb250693" exitCode=0 Dec 09 14:30:10 crc kubenswrapper[4770]: I1209 14:30:10.744493 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48dxs" event={"ID":"410c8f09-8472-4bb2-a017-24a1b2e9d6af","Type":"ContainerDied","Data":"d6e00117e78a4c74a0e2452705af6de840a8915564cc001247a0aae3cb250693"} Dec 09 14:30:10 crc kubenswrapper[4770]: I1209 14:30:10.745030 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48dxs" event={"ID":"410c8f09-8472-4bb2-a017-24a1b2e9d6af","Type":"ContainerStarted","Data":"9c78fdcea2eab0b1098342a49af8a7fecf09632a7483ed0b5ed1687621e6d1aa"} Dec 09 14:30:10 crc kubenswrapper[4770]: I1209 14:30:10.749344 4770 generic.go:334] "Generic (PLEG): container finished" podID="133b114f-fd6f-429b-8d37-4ac8e0a48730" containerID="cd40a20fe4fac1304c515beac7d53a2786ff56bb72f712701d4dc462aa3e9e71" exitCode=0 Dec 09 14:30:10 crc kubenswrapper[4770]: I1209 14:30:10.749411 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7scr" event={"ID":"133b114f-fd6f-429b-8d37-4ac8e0a48730","Type":"ContainerDied","Data":"cd40a20fe4fac1304c515beac7d53a2786ff56bb72f712701d4dc462aa3e9e71"} Dec 09 14:30:10 crc kubenswrapper[4770]: I1209 14:30:10.749511 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7scr" event={"ID":"133b114f-fd6f-429b-8d37-4ac8e0a48730","Type":"ContainerStarted","Data":"27a77c7250a43293f85ff18b70b47b1609592d14b1524facdddc41e490f6d95a"} Dec 09 14:30:10 crc kubenswrapper[4770]: I1209 14:30:10.753873 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m4fp" event={"ID":"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3","Type":"ContainerStarted","Data":"8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a"} Dec 09 14:30:10 crc kubenswrapper[4770]: I1209 14:30:10.810788 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7m4fp" podStartSLOduration=2.406328492 podStartE2EDuration="3.810770087s" podCreationTimestamp="2025-12-09 14:30:07 +0000 UTC" firstStartedPulling="2025-12-09 14:30:08.732195566 +0000 UTC m=+440.628397702" lastFinishedPulling="2025-12-09 14:30:10.136637161 +0000 UTC m=+442.032839297" observedRunningTime="2025-12-09 14:30:10.807836577 +0000 UTC m=+442.704038723" watchObservedRunningTime="2025-12-09 14:30:10.810770087 +0000 UTC m=+442.706972233" Dec 09 14:30:11 crc 
kubenswrapper[4770]: I1209 14:30:11.391616 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wzq58"] Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.393660 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.397014 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.405064 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wzq58"] Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.470121 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a59e82d-89cd-49ea-a28e-9ce9a16ea47b-catalog-content\") pod \"community-operators-wzq58\" (UID: \"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b\") " pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.470288 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxtr6\" (UniqueName: \"kubernetes.io/projected/1a59e82d-89cd-49ea-a28e-9ce9a16ea47b-kube-api-access-hxtr6\") pod \"community-operators-wzq58\" (UID: \"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b\") " pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.470330 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a59e82d-89cd-49ea-a28e-9ce9a16ea47b-utilities\") pod \"community-operators-wzq58\" (UID: \"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b\") " pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.580999 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a59e82d-89cd-49ea-a28e-9ce9a16ea47b-catalog-content\") pod \"community-operators-wzq58\" (UID: \"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b\") " pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.581199 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxtr6\" (UniqueName: \"kubernetes.io/projected/1a59e82d-89cd-49ea-a28e-9ce9a16ea47b-kube-api-access-hxtr6\") pod \"community-operators-wzq58\" (UID: \"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b\") " pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.581235 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a59e82d-89cd-49ea-a28e-9ce9a16ea47b-utilities\") pod \"community-operators-wzq58\" (UID: \"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b\") " pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.581673 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a59e82d-89cd-49ea-a28e-9ce9a16ea47b-catalog-content\") pod \"community-operators-wzq58\" (UID: \"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b\") " pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 
crc kubenswrapper[4770]: I1209 14:30:11.581754 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a59e82d-89cd-49ea-a28e-9ce9a16ea47b-utilities\") pod \"community-operators-wzq58\" (UID: \"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b\") " pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.600602 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxtr6\" (UniqueName: \"kubernetes.io/projected/1a59e82d-89cd-49ea-a28e-9ce9a16ea47b-kube-api-access-hxtr6\") pod \"community-operators-wzq58\" (UID: \"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b\") " pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.723183 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.772362 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48dxs" event={"ID":"410c8f09-8472-4bb2-a017-24a1b2e9d6af","Type":"ContainerStarted","Data":"2c4c48e59cd26453e7785b1412e63154cfbff338f00d4df6c999445cec0b68fc"} Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.777358 4770 generic.go:334] "Generic (PLEG): container finished" podID="133b114f-fd6f-429b-8d37-4ac8e0a48730" containerID="eadc948e0e619881015221d3202ebad480608b704bb47bb0590c58a457934ef8" exitCode=0 Dec 09 14:30:11 crc kubenswrapper[4770]: I1209 14:30:11.777512 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7scr" event={"ID":"133b114f-fd6f-429b-8d37-4ac8e0a48730","Type":"ContainerDied","Data":"eadc948e0e619881015221d3202ebad480608b704bb47bb0590c58a457934ef8"} Dec 09 14:30:12 crc kubenswrapper[4770]: I1209 14:30:12.210996 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wzq58"] Dec 09 14:30:12 crc kubenswrapper[4770]: I1209 14:30:12.785768 4770 generic.go:334] "Generic (PLEG): container finished" podID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerID="2c4c48e59cd26453e7785b1412e63154cfbff338f00d4df6c999445cec0b68fc" exitCode=0 Dec 09 14:30:12 crc kubenswrapper[4770]: I1209 14:30:12.785975 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48dxs" event={"ID":"410c8f09-8472-4bb2-a017-24a1b2e9d6af","Type":"ContainerDied","Data":"2c4c48e59cd26453e7785b1412e63154cfbff338f00d4df6c999445cec0b68fc"} Dec 09 14:30:12 crc kubenswrapper[4770]: I1209 14:30:12.789152 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7scr" event={"ID":"133b114f-fd6f-429b-8d37-4ac8e0a48730","Type":"ContainerStarted","Data":"4034e21621dffdfea1f2e37c335fdc09b7a81b7f2e364664404313cf38a234f2"} Dec 09 14:30:12 crc kubenswrapper[4770]: I1209 14:30:12.792187 4770 generic.go:334] "Generic (PLEG): container finished" podID="1a59e82d-89cd-49ea-a28e-9ce9a16ea47b" containerID="0c72a87a8424bd3eb10c08ac9e5c8552ee23299d491863f157afd63d748b5fa7" exitCode=0 Dec 09 14:30:12 crc kubenswrapper[4770]: I1209 14:30:12.792235 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzq58" event={"ID":"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b","Type":"ContainerDied","Data":"0c72a87a8424bd3eb10c08ac9e5c8552ee23299d491863f157afd63d748b5fa7"} Dec 09 14:30:12 crc kubenswrapper[4770]: I1209 
14:30:12.792266 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzq58" event={"ID":"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b","Type":"ContainerStarted","Data":"f82d810bcbb4b6003f6de69786c7e4511aef3dbd17b33fa267e55fd71994ed9b"} Dec 09 14:30:12 crc kubenswrapper[4770]: I1209 14:30:12.831255 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r7scr" podStartSLOduration=3.250048278 podStartE2EDuration="4.831237197s" podCreationTimestamp="2025-12-09 14:30:08 +0000 UTC" firstStartedPulling="2025-12-09 14:30:10.750588298 +0000 UTC m=+442.646790444" lastFinishedPulling="2025-12-09 14:30:12.331777227 +0000 UTC m=+444.227979363" observedRunningTime="2025-12-09 14:30:12.828137651 +0000 UTC m=+444.724339787" watchObservedRunningTime="2025-12-09 14:30:12.831237197 +0000 UTC m=+444.727439333" Dec 09 14:30:13 crc kubenswrapper[4770]: I1209 14:30:13.807697 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48dxs" event={"ID":"410c8f09-8472-4bb2-a017-24a1b2e9d6af","Type":"ContainerStarted","Data":"a02234cc9d2949db854ef0e8b65e0855eaab37710ceefa903dc5d10840cb531b"} Dec 09 14:30:13 crc kubenswrapper[4770]: I1209 14:30:13.814791 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzq58" event={"ID":"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b","Type":"ContainerStarted","Data":"53801c63d135beb97c3779279eb761bf2b9dbff812b73732117c207160a7e984"} Dec 09 14:30:13 crc kubenswrapper[4770]: I1209 14:30:13.848760 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-48dxs" podStartSLOduration=2.437004553 podStartE2EDuration="4.848721095s" podCreationTimestamp="2025-12-09 14:30:09 +0000 UTC" firstStartedPulling="2025-12-09 14:30:10.748561023 +0000 UTC m=+442.644763169" lastFinishedPulling="2025-12-09 14:30:13.160277575 +0000 UTC m=+445.056479711" observedRunningTime="2025-12-09 14:30:13.844262133 +0000 UTC m=+445.740464269" watchObservedRunningTime="2025-12-09 14:30:13.848721095 +0000 UTC m=+445.744923231" Dec 09 14:30:14 crc kubenswrapper[4770]: I1209 14:30:14.818475 4770 generic.go:334] "Generic (PLEG): container finished" podID="1a59e82d-89cd-49ea-a28e-9ce9a16ea47b" containerID="53801c63d135beb97c3779279eb761bf2b9dbff812b73732117c207160a7e984" exitCode=0 Dec 09 14:30:14 crc kubenswrapper[4770]: I1209 14:30:14.818587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzq58" event={"ID":"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b","Type":"ContainerDied","Data":"53801c63d135beb97c3779279eb761bf2b9dbff812b73732117c207160a7e984"} Dec 09 14:30:16 crc kubenswrapper[4770]: I1209 14:30:16.835336 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzq58" event={"ID":"1a59e82d-89cd-49ea-a28e-9ce9a16ea47b","Type":"ContainerStarted","Data":"f0ed19a46605bbdbed0f1b6a3def7104c2c97dcf01d511e99075bb140f643897"} Dec 09 14:30:16 crc kubenswrapper[4770]: I1209 14:30:16.857466 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wzq58" podStartSLOduration=3.067940614 podStartE2EDuration="5.857444661s" podCreationTimestamp="2025-12-09 14:30:11 +0000 UTC" firstStartedPulling="2025-12-09 14:30:12.793253076 +0000 UTC m=+444.689455212" lastFinishedPulling="2025-12-09 14:30:15.582757123 +0000 UTC m=+447.478959259" 
observedRunningTime="2025-12-09 14:30:16.855702583 +0000 UTC m=+448.751904719" watchObservedRunningTime="2025-12-09 14:30:16.857444661 +0000 UTC m=+448.753646797" Dec 09 14:30:16 crc kubenswrapper[4770]: I1209 14:30:16.944319 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" podUID="57ed857f-e806-40ad-bd78-4aecbfc24699" containerName="registry" containerID="cri-o://356ba5c73e740389aa3f9b907d7384040deee6880f93f3a2daaa414523f3dae9" gracePeriod=30 Dec 09 14:30:17 crc kubenswrapper[4770]: I1209 14:30:17.511717 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:17 crc kubenswrapper[4770]: I1209 14:30:17.513236 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:17 crc kubenswrapper[4770]: I1209 14:30:17.565589 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:17 crc kubenswrapper[4770]: I1209 14:30:17.841858 4770 generic.go:334] "Generic (PLEG): container finished" podID="57ed857f-e806-40ad-bd78-4aecbfc24699" containerID="356ba5c73e740389aa3f9b907d7384040deee6880f93f3a2daaa414523f3dae9" exitCode=0 Dec 09 14:30:17 crc kubenswrapper[4770]: I1209 14:30:17.841974 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" event={"ID":"57ed857f-e806-40ad-bd78-4aecbfc24699","Type":"ContainerDied","Data":"356ba5c73e740389aa3f9b907d7384040deee6880f93f3a2daaa414523f3dae9"} Dec 09 14:30:17 crc kubenswrapper[4770]: I1209 14:30:17.888490 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 14:30:19 crc kubenswrapper[4770]: I1209 14:30:19.324226 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:19 crc kubenswrapper[4770]: I1209 14:30:19.324656 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:19 crc kubenswrapper[4770]: I1209 14:30:19.377909 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:19 crc kubenswrapper[4770]: I1209 14:30:19.940029 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r7scr" Dec 09 14:30:19 crc kubenswrapper[4770]: I1209 14:30:19.953379 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:19 crc kubenswrapper[4770]: I1209 14:30:19.953447 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:19 crc kubenswrapper[4770]: I1209 14:30:19.992527 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.108786 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.227966 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-tls\") pod \"57ed857f-e806-40ad-bd78-4aecbfc24699\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.228022 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2br6\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-kube-api-access-w2br6\") pod \"57ed857f-e806-40ad-bd78-4aecbfc24699\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.228054 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/57ed857f-e806-40ad-bd78-4aecbfc24699-ca-trust-extracted\") pod \"57ed857f-e806-40ad-bd78-4aecbfc24699\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.228940 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"57ed857f-e806-40ad-bd78-4aecbfc24699\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.229034 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/57ed857f-e806-40ad-bd78-4aecbfc24699-installation-pull-secrets\") pod \"57ed857f-e806-40ad-bd78-4aecbfc24699\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.229075 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-bound-sa-token\") pod \"57ed857f-e806-40ad-bd78-4aecbfc24699\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.229149 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-certificates\") pod \"57ed857f-e806-40ad-bd78-4aecbfc24699\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.229269 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-trusted-ca\") pod \"57ed857f-e806-40ad-bd78-4aecbfc24699\" (UID: \"57ed857f-e806-40ad-bd78-4aecbfc24699\") " Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.233129 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "57ed857f-e806-40ad-bd78-4aecbfc24699" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.233510 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "57ed857f-e806-40ad-bd78-4aecbfc24699" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.237741 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "57ed857f-e806-40ad-bd78-4aecbfc24699" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.245248 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-kube-api-access-w2br6" (OuterVolumeSpecName: "kube-api-access-w2br6") pod "57ed857f-e806-40ad-bd78-4aecbfc24699" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699"). InnerVolumeSpecName "kube-api-access-w2br6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.250574 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "57ed857f-e806-40ad-bd78-4aecbfc24699" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.254691 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "57ed857f-e806-40ad-bd78-4aecbfc24699" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.264187 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57ed857f-e806-40ad-bd78-4aecbfc24699-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "57ed857f-e806-40ad-bd78-4aecbfc24699" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.275880 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57ed857f-e806-40ad-bd78-4aecbfc24699-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "57ed857f-e806-40ad-bd78-4aecbfc24699" (UID: "57ed857f-e806-40ad-bd78-4aecbfc24699"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.336324 4770 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/57ed857f-e806-40ad-bd78-4aecbfc24699-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.336366 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.336378 4770 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.336390 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57ed857f-e806-40ad-bd78-4aecbfc24699-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.336402 4770 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.336412 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2br6\" (UniqueName: \"kubernetes.io/projected/57ed857f-e806-40ad-bd78-4aecbfc24699-kube-api-access-w2br6\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.336422 4770 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/57ed857f-e806-40ad-bd78-4aecbfc24699-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.861426 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" event={"ID":"57ed857f-e806-40ad-bd78-4aecbfc24699","Type":"ContainerDied","Data":"c4179adf33eac3b58eb20d18662067c80e1aa8b3f2359d70f063e832474bfb0d"} Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.861479 4770 scope.go:117] "RemoveContainer" containerID="356ba5c73e740389aa3f9b907d7384040deee6880f93f3a2daaa414523f3dae9" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.861571 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.881858 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ztlqj"] Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.888123 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ztlqj"] Dec 09 14:30:20 crc kubenswrapper[4770]: I1209 14:30:20.916611 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 14:30:21 crc kubenswrapper[4770]: I1209 14:30:21.723973 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:21 crc kubenswrapper[4770]: I1209 14:30:21.724029 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:21 crc kubenswrapper[4770]: I1209 14:30:21.764362 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:21 crc kubenswrapper[4770]: I1209 14:30:21.910785 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wzq58" Dec 09 14:30:22 crc kubenswrapper[4770]: I1209 14:30:22.593866 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ed857f-e806-40ad-bd78-4aecbfc24699" path="/var/lib/kubelet/pods/57ed857f-e806-40ad-bd78-4aecbfc24699/volumes" Dec 09 14:30:25 crc kubenswrapper[4770]: I1209 14:30:25.020855 4770 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-ztlqj container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.13:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:30:25 crc kubenswrapper[4770]: I1209 14:30:25.020917 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-ztlqj" podUID="57ed857f-e806-40ad-bd78-4aecbfc24699" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.13:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:32:14 crc kubenswrapper[4770]: I1209 14:32:14.243556 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:32:14 crc kubenswrapper[4770]: I1209 14:32:14.244090 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:32:44 crc kubenswrapper[4770]: I1209 14:32:44.243414 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 09 14:32:44 crc kubenswrapper[4770]: I1209 14:32:44.244319 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:33:14 crc kubenswrapper[4770]: I1209 14:33:14.243870 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:33:14 crc kubenswrapper[4770]: I1209 14:33:14.244456 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:33:14 crc kubenswrapper[4770]: I1209 14:33:14.244504 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:33:14 crc kubenswrapper[4770]: I1209 14:33:14.245138 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4143fcf6193bb8d37b6aa9f74630ea967df19039b5e904f79a07122fe7fe763"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:33:14 crc kubenswrapper[4770]: I1209 14:33:14.245219 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://c4143fcf6193bb8d37b6aa9f74630ea967df19039b5e904f79a07122fe7fe763" gracePeriod=600 Dec 09 14:33:15 crc kubenswrapper[4770]: I1209 14:33:15.277194 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="c4143fcf6193bb8d37b6aa9f74630ea967df19039b5e904f79a07122fe7fe763" exitCode=0 Dec 09 14:33:15 crc kubenswrapper[4770]: I1209 14:33:15.277255 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"c4143fcf6193bb8d37b6aa9f74630ea967df19039b5e904f79a07122fe7fe763"} Dec 09 14:33:15 crc kubenswrapper[4770]: I1209 14:33:15.277683 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"7a2d7351a37474ac1217b67bfcec9a9843ac24bf4743146b36e820021a607f34"} Dec 09 14:33:15 crc kubenswrapper[4770]: I1209 14:33:15.277764 4770 scope.go:117] "RemoveContainer" containerID="e8d1eda564365c5c920f110d0bb1f391b787b6130b8ab2f01b19986d8be82924" Dec 09 14:35:14 crc kubenswrapper[4770]: I1209 14:35:14.243683 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:35:14 crc kubenswrapper[4770]: I1209 14:35:14.244676 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:35:26 crc kubenswrapper[4770]: I1209 14:35:26.051038 4770 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.632162 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl"] Dec 09 14:35:36 crc kubenswrapper[4770]: E1209 14:35:36.633003 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ed857f-e806-40ad-bd78-4aecbfc24699" containerName="registry" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.633016 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ed857f-e806-40ad-bd78-4aecbfc24699" containerName="registry" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.633119 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ed857f-e806-40ad-bd78-4aecbfc24699" containerName="registry" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.633896 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.636417 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.645595 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl"] Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.817989 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g7kt\" (UniqueName: \"kubernetes.io/projected/a6fdfcce-b460-4869-acac-7ca04cb0b308-kube-api-access-4g7kt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.818070 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.818115 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.919307 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g7kt\" (UniqueName: \"kubernetes.io/projected/a6fdfcce-b460-4869-acac-7ca04cb0b308-kube-api-access-4g7kt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.919383 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.919430 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.920067 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.920319 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:36 crc kubenswrapper[4770]: I1209 14:35:36.953183 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g7kt\" (UniqueName: \"kubernetes.io/projected/a6fdfcce-b460-4869-acac-7ca04cb0b308-kube-api-access-4g7kt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:37 crc kubenswrapper[4770]: I1209 14:35:37.009398 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:37 crc kubenswrapper[4770]: I1209 14:35:37.261796 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl"] Dec 09 14:35:38 crc kubenswrapper[4770]: I1209 14:35:38.160253 4770 generic.go:334] "Generic (PLEG): container finished" podID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerID="ce440daab17e8cea3e3f7f5f80ba2408324506aea5579a370b90a609848146cf" exitCode=0 Dec 09 14:35:38 crc kubenswrapper[4770]: I1209 14:35:38.160342 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" event={"ID":"a6fdfcce-b460-4869-acac-7ca04cb0b308","Type":"ContainerDied","Data":"ce440daab17e8cea3e3f7f5f80ba2408324506aea5579a370b90a609848146cf"} Dec 09 14:35:38 crc kubenswrapper[4770]: I1209 14:35:38.160785 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" event={"ID":"a6fdfcce-b460-4869-acac-7ca04cb0b308","Type":"ContainerStarted","Data":"7a3aa9cbc8ecbf6351de8e0ebc526b3653dd8130046cf06f93d9aba31dfc45b8"} Dec 09 14:35:38 crc kubenswrapper[4770]: I1209 14:35:38.162367 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:35:38 crc kubenswrapper[4770]: I1209 14:35:38.986095 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h6tvz"] Dec 09 14:35:38 crc kubenswrapper[4770]: I1209 14:35:38.990520 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.004876 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6tvz"] Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.055266 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l27lp\" (UniqueName: \"kubernetes.io/projected/cc130d05-d4ba-4206-93ab-7064c5c022ca-kube-api-access-l27lp\") pod \"redhat-operators-h6tvz\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.055305 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-utilities\") pod \"redhat-operators-h6tvz\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.055336 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-catalog-content\") pod \"redhat-operators-h6tvz\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.155857 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l27lp\" (UniqueName: \"kubernetes.io/projected/cc130d05-d4ba-4206-93ab-7064c5c022ca-kube-api-access-l27lp\") pod \"redhat-operators-h6tvz\" (UID: 
\"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.155937 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-utilities\") pod \"redhat-operators-h6tvz\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.155974 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-catalog-content\") pod \"redhat-operators-h6tvz\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.156650 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-catalog-content\") pod \"redhat-operators-h6tvz\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.156948 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-utilities\") pod \"redhat-operators-h6tvz\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.183561 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l27lp\" (UniqueName: \"kubernetes.io/projected/cc130d05-d4ba-4206-93ab-7064c5c022ca-kube-api-access-l27lp\") pod \"redhat-operators-h6tvz\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.354016 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:39 crc kubenswrapper[4770]: I1209 14:35:39.588738 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6tvz"] Dec 09 14:35:39 crc kubenswrapper[4770]: W1209 14:35:39.595315 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc130d05_d4ba_4206_93ab_7064c5c022ca.slice/crio-5e22512493476e22a548cd4c8c16b40475ecd3c5bc7e8dcc4122e22ce6d887b6 WatchSource:0}: Error finding container 5e22512493476e22a548cd4c8c16b40475ecd3c5bc7e8dcc4122e22ce6d887b6: Status 404 returned error can't find the container with id 5e22512493476e22a548cd4c8c16b40475ecd3c5bc7e8dcc4122e22ce6d887b6 Dec 09 14:35:40 crc kubenswrapper[4770]: I1209 14:35:40.175247 4770 generic.go:334] "Generic (PLEG): container finished" podID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerID="21b5f4fb256aef1dbcafd258108d7b8c0d01d0d196e5bd90ff68d5058b78b4d5" exitCode=0 Dec 09 14:35:40 crc kubenswrapper[4770]: I1209 14:35:40.175310 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" event={"ID":"a6fdfcce-b460-4869-acac-7ca04cb0b308","Type":"ContainerDied","Data":"21b5f4fb256aef1dbcafd258108d7b8c0d01d0d196e5bd90ff68d5058b78b4d5"} Dec 09 14:35:40 crc kubenswrapper[4770]: I1209 14:35:40.178556 4770 generic.go:334] "Generic (PLEG): container finished" podID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerID="2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d" exitCode=0 Dec 09 14:35:40 crc kubenswrapper[4770]: I1209 14:35:40.178617 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tvz" event={"ID":"cc130d05-d4ba-4206-93ab-7064c5c022ca","Type":"ContainerDied","Data":"2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d"} Dec 09 14:35:40 crc kubenswrapper[4770]: I1209 14:35:40.178661 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tvz" event={"ID":"cc130d05-d4ba-4206-93ab-7064c5c022ca","Type":"ContainerStarted","Data":"5e22512493476e22a548cd4c8c16b40475ecd3c5bc7e8dcc4122e22ce6d887b6"} Dec 09 14:35:41 crc kubenswrapper[4770]: I1209 14:35:41.185853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tvz" event={"ID":"cc130d05-d4ba-4206-93ab-7064c5c022ca","Type":"ContainerStarted","Data":"99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a"} Dec 09 14:35:41 crc kubenswrapper[4770]: I1209 14:35:41.191202 4770 generic.go:334] "Generic (PLEG): container finished" podID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerID="3f91fcf51d8f76839d07e0a5bc5dbfd8b5ac5613535c0c586a54704b62c2ef58" exitCode=0 Dec 09 14:35:41 crc kubenswrapper[4770]: I1209 14:35:41.191251 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" event={"ID":"a6fdfcce-b460-4869-acac-7ca04cb0b308","Type":"ContainerDied","Data":"3f91fcf51d8f76839d07e0a5bc5dbfd8b5ac5613535c0c586a54704b62c2ef58"} Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.199202 4770 generic.go:334] "Generic (PLEG): container finished" podID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerID="99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a" exitCode=0 Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.199318 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tvz" event={"ID":"cc130d05-d4ba-4206-93ab-7064c5c022ca","Type":"ContainerDied","Data":"99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a"} Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.441524 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.597935 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-util\") pod \"a6fdfcce-b460-4869-acac-7ca04cb0b308\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.598005 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g7kt\" (UniqueName: \"kubernetes.io/projected/a6fdfcce-b460-4869-acac-7ca04cb0b308-kube-api-access-4g7kt\") pod \"a6fdfcce-b460-4869-acac-7ca04cb0b308\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.598608 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-bundle\") pod \"a6fdfcce-b460-4869-acac-7ca04cb0b308\" (UID: \"a6fdfcce-b460-4869-acac-7ca04cb0b308\") " Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.602006 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-bundle" (OuterVolumeSpecName: "bundle") pod "a6fdfcce-b460-4869-acac-7ca04cb0b308" (UID: "a6fdfcce-b460-4869-acac-7ca04cb0b308"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.604036 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6fdfcce-b460-4869-acac-7ca04cb0b308-kube-api-access-4g7kt" (OuterVolumeSpecName: "kube-api-access-4g7kt") pod "a6fdfcce-b460-4869-acac-7ca04cb0b308" (UID: "a6fdfcce-b460-4869-acac-7ca04cb0b308"). InnerVolumeSpecName "kube-api-access-4g7kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.615373 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-util" (OuterVolumeSpecName: "util") pod "a6fdfcce-b460-4869-acac-7ca04cb0b308" (UID: "a6fdfcce-b460-4869-acac-7ca04cb0b308"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.700134 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.700165 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g7kt\" (UniqueName: \"kubernetes.io/projected/a6fdfcce-b460-4869-acac-7ca04cb0b308-kube-api-access-4g7kt\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:42 crc kubenswrapper[4770]: I1209 14:35:42.700179 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6fdfcce-b460-4869-acac-7ca04cb0b308-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:43 crc kubenswrapper[4770]: I1209 14:35:43.207284 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tvz" event={"ID":"cc130d05-d4ba-4206-93ab-7064c5c022ca","Type":"ContainerStarted","Data":"76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9"} Dec 09 14:35:43 crc kubenswrapper[4770]: I1209 14:35:43.210110 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" event={"ID":"a6fdfcce-b460-4869-acac-7ca04cb0b308","Type":"ContainerDied","Data":"7a3aa9cbc8ecbf6351de8e0ebc526b3653dd8130046cf06f93d9aba31dfc45b8"} Dec 09 14:35:43 crc kubenswrapper[4770]: I1209 14:35:43.210159 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a3aa9cbc8ecbf6351de8e0ebc526b3653dd8130046cf06f93d9aba31dfc45b8" Dec 09 14:35:43 crc kubenswrapper[4770]: I1209 14:35:43.210342 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl" Dec 09 14:35:43 crc kubenswrapper[4770]: I1209 14:35:43.231259 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h6tvz" podStartSLOduration=2.823323444 podStartE2EDuration="5.231225965s" podCreationTimestamp="2025-12-09 14:35:38 +0000 UTC" firstStartedPulling="2025-12-09 14:35:40.186992306 +0000 UTC m=+772.083194452" lastFinishedPulling="2025-12-09 14:35:42.594894817 +0000 UTC m=+774.491096973" observedRunningTime="2025-12-09 14:35:43.225287031 +0000 UTC m=+775.121489177" watchObservedRunningTime="2025-12-09 14:35:43.231225965 +0000 UTC m=+775.127428111" Dec 09 14:35:44 crc kubenswrapper[4770]: I1209 14:35:44.243770 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:35:44 crc kubenswrapper[4770]: I1209 14:35:44.243863 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:35:47 crc kubenswrapper[4770]: I1209 14:35:47.761436 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k4btz"] Dec 09 14:35:47 crc kubenswrapper[4770]: I1209 14:35:47.762520 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="nbdb" containerID="cri-o://8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb" gracePeriod=30 Dec 09 14:35:47 crc kubenswrapper[4770]: I1209 14:35:47.762703 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="sbdb" containerID="cri-o://a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698" gracePeriod=30 Dec 09 14:35:47 crc kubenswrapper[4770]: I1209 14:35:47.762676 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kube-rbac-proxy-node" containerID="cri-o://906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041" gracePeriod=30 Dec 09 14:35:47 crc kubenswrapper[4770]: I1209 14:35:47.762717 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovn-acl-logging" containerID="cri-o://76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f" gracePeriod=30 Dec 09 14:35:47 crc kubenswrapper[4770]: I1209 14:35:47.762909 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="northd" containerID="cri-o://a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8" gracePeriod=30 Dec 09 14:35:47 crc kubenswrapper[4770]: I1209 14:35:47.762950 4770 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5" gracePeriod=30 Dec 09 14:35:47 crc kubenswrapper[4770]: I1209 14:35:47.763201 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovn-controller" containerID="cri-o://303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a" gracePeriod=30 Dec 09 14:35:47 crc kubenswrapper[4770]: I1209 14:35:47.806339 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" containerID="cri-o://1ebc9ebb54c0e83d7c8040bfc05ce9ea915130a5e497e2573d722aae810b7ebb" gracePeriod=30 Dec 09 14:35:48 crc kubenswrapper[4770]: I1209 14:35:48.980094 4770 scope.go:117] "RemoveContainer" containerID="7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3" Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.356575 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.356634 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.659343 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h5dw2_c38553c5-6cc9-435b-8c52-3262b861d1cf/kube-multus/2.log" Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.660269 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h5dw2_c38553c5-6cc9-435b-8c52-3262b861d1cf/kube-multus/1.log" Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.660339 4770 generic.go:334] "Generic (PLEG): container finished" podID="c38553c5-6cc9-435b-8c52-3262b861d1cf" containerID="56d185a3c0466cb2fca6ba8405177abc87c8aae2c2b0db2307e65712aabe4905" exitCode=2 Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.660444 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h5dw2" event={"ID":"c38553c5-6cc9-435b-8c52-3262b861d1cf","Type":"ContainerDied","Data":"56d185a3c0466cb2fca6ba8405177abc87c8aae2c2b0db2307e65712aabe4905"} Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.660525 4770 scope.go:117] "RemoveContainer" containerID="a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5" Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.661318 4770 scope.go:117] "RemoveContainer" containerID="56d185a3c0466cb2fca6ba8405177abc87c8aae2c2b0db2307e65712aabe4905" Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.672628 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovn-acl-logging/0.log" Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.673455 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovn-controller/0.log" Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675436 4770 generic.go:334] "Generic (PLEG): container finished" 
podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="1ebc9ebb54c0e83d7c8040bfc05ce9ea915130a5e497e2573d722aae810b7ebb" exitCode=0 Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675474 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698" exitCode=0 Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675483 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb" exitCode=0 Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675489 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8" exitCode=0 Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675496 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5" exitCode=0 Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675504 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041" exitCode=0 Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675510 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f" exitCode=143 Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675518 4770 generic.go:334] "Generic (PLEG): container finished" podID="39aa66d3-1416-4178-a4bc-34179463fd45" containerID="303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a" exitCode=143 Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675540 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"1ebc9ebb54c0e83d7c8040bfc05ce9ea915130a5e497e2573d722aae810b7ebb"} Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675569 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698"} Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675579 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb"} Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8"} Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675596 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5"} Dec 09 14:35:49 crc 
kubenswrapper[4770]: I1209 14:35:49.675604 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041"} Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675615 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f"} Dec 09 14:35:49 crc kubenswrapper[4770]: I1209 14:35:49.675624 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a"} Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.228183 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb is running failed: container process not found" containerID="8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.228853 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698 is running failed: container process not found" containerID="a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.232884 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb is running failed: container process not found" containerID="8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.233041 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698 is running failed: container process not found" containerID="a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.247042 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb is running failed: container process not found" containerID="8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.247032 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698 is running failed: container process not found" containerID="a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.247088 4770 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="nbdb" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.247120 4770 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="sbdb" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.310369 4770 scope.go:117] "RemoveContainer" containerID="7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.318745 4770 scope.go:117] "RemoveContainer" containerID="a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.319375 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5\": container with ID starting with a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5 not found: ID does not exist" containerID="a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.319437 4770 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5\": rpc error: code = NotFound desc = could not find container \"a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5\": container with ID starting with a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5 not found: ID does not exist" containerID="a420e13c361922e163d851dfdfb1808a59d2a5814e4f141f64b58754a8917dc5" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.326507 4770 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_ovnkube-controller_ovnkube-node-k4btz_openshift-ovn-kubernetes_39aa66d3-1416-4178-a4bc-34179463fd45_3 in pod sandbox f3697b6905a040b8726353af6682f59f50aa7faa9f4fc517d9227165fdd0cb19 from index: no such id: '7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3'" containerID="7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.326580 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3"} err="rpc error: code = Unknown desc = failed to delete container k8s_ovnkube-controller_ovnkube-node-k4btz_openshift-ovn-kubernetes_39aa66d3-1416-4178-a4bc-34179463fd45_3 in pod sandbox f3697b6905a040b8726353af6682f59f50aa7faa9f4fc517d9227165fdd0cb19 from index: no such id: '7ae149a6641c737c671539dde46ec5e9f1c512eeee04499fbf906d5b2dd9d8f3'" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.390625 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovn-acl-logging/0.log" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.391104 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovn-controller/0.log" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.391669 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.460306 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h6tvz" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerName="registry-server" probeResult="failure" output=< Dec 09 14:35:50 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Dec 09 14:35:50 crc kubenswrapper[4770]: > Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.489720 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7vpdp"] Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.490362 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerName="extract" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.490428 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerName="extract" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.490501 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovn-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.490564 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovn-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.490624 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovn-acl-logging" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.490690 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovn-acl-logging" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.490793 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerName="pull" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.490852 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerName="pull" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.490902 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.490953 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491003 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="nbdb" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.491047 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="nbdb" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491100 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="sbdb" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.491151 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="sbdb" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491199 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.491306 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491361 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.491406 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491453 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="northd" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.491506 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="northd" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491554 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.491601 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491649 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.491700 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491778 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kube-rbac-proxy-node" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.491838 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kube-rbac-proxy-node" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491896 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerName="util" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.491949 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerName="util" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.491996 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kubecfg-setup" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492045 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kubecfg-setup" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492231 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovn-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492295 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kube-rbac-proxy-node" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492351 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492405 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492458 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovn-acl-logging" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492523 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6fdfcce-b460-4869-acac-7ca04cb0b308" containerName="extract" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492574 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492626 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="nbdb" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492680 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492752 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="sbdb" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.492831 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="northd" Dec 09 14:35:50 crc kubenswrapper[4770]: E1209 14:35:50.493013 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.493072 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.493212 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.493277 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" containerName="ovnkube-controller" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.495336 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511129 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-config\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511189 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpm77\" (UniqueName: \"kubernetes.io/projected/39aa66d3-1416-4178-a4bc-34179463fd45-kube-api-access-hpm77\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511216 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-slash\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511232 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-node-log\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511267 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-bin\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511285 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-ovn\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511324 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39aa66d3-1416-4178-a4bc-34179463fd45-ovn-node-metrics-cert\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511348 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-netns\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511369 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-env-overrides\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511386 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-log-socket\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") "
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511322 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-slash" (OuterVolumeSpecName: "host-slash") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511346 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511356 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-node-log" (OuterVolumeSpecName: "node-log") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511437 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511391 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511410 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-etc-openvswitch\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511473 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-kubelet\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511496 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-netd\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511516 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-ovn-kubernetes\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511511 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-log-socket" (OuterVolumeSpecName: "log-socket") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511548 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-openvswitch\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511549 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511617 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-var-lib-openvswitch\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511484 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511566 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511570 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511590 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511655 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-script-lib\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511676 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511676 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511696 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511711 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-systemd-units\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511755 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511749 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-systemd\") pod \"39aa66d3-1416-4178-a4bc-34179463fd45\" (UID: \"39aa66d3-1416-4178-a4bc-34179463fd45\") " Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511914 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.511962 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512171 4770 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512185 4770 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512194 4770 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512203 4770 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-log-socket\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512210 4770 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512218 4770 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512226 4770 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512235 4770 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512245 4770 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512253 4770 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512262 4770 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512271 4770 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512280 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512287 4770 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-slash\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512294 4770 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-node-log\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512302 4770 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.512419 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.528126 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39aa66d3-1416-4178-a4bc-34179463fd45-kube-api-access-hpm77" (OuterVolumeSpecName: "kube-api-access-hpm77") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "kube-api-access-hpm77". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.531824 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39aa66d3-1416-4178-a4bc-34179463fd45-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.537270 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "39aa66d3-1416-4178-a4bc-34179463fd45" (UID: "39aa66d3-1416-4178-a4bc-34179463fd45"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613157 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-systemd-units\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613214 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-run-systemd\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613236 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-log-socket\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613254 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-cni-netd\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-kubelet\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613504 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/702e680a-b624-4197-94c8-58bbf2c63186-ovnkube-config\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613571 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-run-ovn\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613771 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-cni-bin\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613828 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-node-log\") pod \"ovnkube-node-7vpdp\" (UID: 
\"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613873 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-run-netns\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613914 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-run-openvswitch\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613943 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/702e680a-b624-4197-94c8-58bbf2c63186-ovn-node-metrics-cert\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.613970 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-var-lib-openvswitch\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614192 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/702e680a-b624-4197-94c8-58bbf2c63186-ovnkube-script-lib\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614276 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/702e680a-b624-4197-94c8-58bbf2c63186-env-overrides\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614304 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-slash\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614427 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-run-ovn-kubernetes\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614495 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-etc-openvswitch\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614526 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnn8k\" (UniqueName: \"kubernetes.io/projected/702e680a-b624-4197-94c8-58bbf2c63186-kube-api-access-pnn8k\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614603 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614748 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39aa66d3-1416-4178-a4bc-34179463fd45-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614781 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39aa66d3-1416-4178-a4bc-34179463fd45-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614793 4770 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39aa66d3-1416-4178-a4bc-34179463fd45-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.614806 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpm77\" (UniqueName: \"kubernetes.io/projected/39aa66d3-1416-4178-a4bc-34179463fd45-kube-api-access-hpm77\") on node \"crc\" DevicePath \"\"" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.683782 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h5dw2_c38553c5-6cc9-435b-8c52-3262b861d1cf/kube-multus/2.log" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.683896 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h5dw2" event={"ID":"c38553c5-6cc9-435b-8c52-3262b861d1cf","Type":"ContainerStarted","Data":"0168d5be8a63e407d9f994af94d03c58bde6bc7208e9d528f1435847e861598c"} Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.688025 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovn-acl-logging/0.log" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.688488 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k4btz_39aa66d3-1416-4178-a4bc-34179463fd45/ovn-controller/0.log" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.688904 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz" event={"ID":"39aa66d3-1416-4178-a4bc-34179463fd45","Type":"ContainerDied","Data":"f3697b6905a040b8726353af6682f59f50aa7faa9f4fc517d9227165fdd0cb19"} Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.688977 4770 
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.688977 4770 scope.go:117] "RemoveContainer" containerID="1ebc9ebb54c0e83d7c8040bfc05ce9ea915130a5e497e2573d722aae810b7ebb"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.688994 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k4btz"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.714870 4770 scope.go:117] "RemoveContainer" containerID="a81e0a38ca0b82c207419bd0b00b369aa4c6388bb5ccbcfcff8b8c91f164c698"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715557 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-run-ovn\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715595 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-cni-bin\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715618 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-node-log\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715637 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-run-netns\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715658 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-run-openvswitch\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715675 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/702e680a-b624-4197-94c8-58bbf2c63186-ovn-node-metrics-cert\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715692 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-var-lib-openvswitch\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715716 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/702e680a-b624-4197-94c8-58bbf2c63186-ovnkube-script-lib\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715760 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-slash\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715776 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/702e680a-b624-4197-94c8-58bbf2c63186-env-overrides\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715794 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-run-ovn-kubernetes\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715817 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-etc-openvswitch\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715834 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnn8k\" (UniqueName: \"kubernetes.io/projected/702e680a-b624-4197-94c8-58bbf2c63186-kube-api-access-pnn8k\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715846 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-run-netns\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715890 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-var-lib-openvswitch\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715927 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715956 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-run-ovn\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715979 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-cni-bin\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715977 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-run-openvswitch\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.715864 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.716561 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/702e680a-b624-4197-94c8-58bbf2c63186-ovnkube-script-lib\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.716836 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-node-log\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.716894 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-etc-openvswitch\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.716911 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-slash\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.717049 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-run-ovn-kubernetes\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.717391 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-systemd-units\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.717413 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-run-systemd\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.717430 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-log-socket\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.717446 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-cni-netd\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.717467 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-kubelet\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.717485 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/702e680a-b624-4197-94c8-58bbf2c63186-ovnkube-config\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.717968 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-log-socket\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.718053 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-systemd-units\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.718088 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-run-systemd\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.718123 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-cni-netd\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.718158 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/702e680a-b624-4197-94c8-58bbf2c63186-host-kubelet\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.720049 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/702e680a-b624-4197-94c8-58bbf2c63186-env-overrides\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.720434 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/702e680a-b624-4197-94c8-58bbf2c63186-ovnkube-config\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.754047 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnn8k\" (UniqueName: \"kubernetes.io/projected/702e680a-b624-4197-94c8-58bbf2c63186-kube-api-access-pnn8k\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.759040 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/702e680a-b624-4197-94c8-58bbf2c63186-ovn-node-metrics-cert\") pod \"ovnkube-node-7vpdp\" (UID: \"702e680a-b624-4197-94c8-58bbf2c63186\") " pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.797473 4770 scope.go:117] "RemoveContainer" containerID="8cb33f719a802268d5e42f1c3dbd2c1d1881e2626a85ccce669b011f092a6abb"
Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.829975 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.834768 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k4btz"] Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.839976 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k4btz"] Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.842751 4770 scope.go:117] "RemoveContainer" containerID="a273a90bab857e47a36a8615137de2b5e97905a629d795f3c51087be778062e8" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.888882 4770 scope.go:117] "RemoveContainer" containerID="aaed924ab6d642e3f2ee09578b38f99c16f9241a584aac6dd7ce265ad94ea6d5" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.925212 4770 scope.go:117] "RemoveContainer" containerID="906801d1e3bd155dbd79d5133774badcd96c943341f9b2b5e536b9c05a787041" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.957515 4770 scope.go:117] "RemoveContainer" containerID="76c2ec34561d58ded8334bae06de377b170883d2e4643024a0f87f1a08085f6f" Dec 09 14:35:50 crc kubenswrapper[4770]: I1209 14:35:50.993306 4770 scope.go:117] "RemoveContainer" containerID="303bce777bff69b82425ee193437bcce288c4b1b2811fd0c19f78b14d98a091a" Dec 09 14:35:51 crc kubenswrapper[4770]: I1209 14:35:51.014849 4770 scope.go:117] "RemoveContainer" containerID="ad17ec276a345be94ef569f0b2337102c81994390015acc5b9e1d2a904dd38d3" Dec 09 14:35:51 crc kubenswrapper[4770]: I1209 14:35:51.698275 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"1bc2d27cffe7e5aba7a6b33061265de949ff5b2396afc93659ea3af6bd9d5708"} Dec 09 14:35:51 crc kubenswrapper[4770]: I1209 14:35:51.698333 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"03a456a5aa7384a2457281cb7388d46147ec2020b1006ad74f76614c379ad256"} Dec 09 14:35:52 crc kubenswrapper[4770]: I1209 14:35:52.597000 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39aa66d3-1416-4178-a4bc-34179463fd45" path="/var/lib/kubelet/pods/39aa66d3-1416-4178-a4bc-34179463fd45/volumes" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.410475 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5"] Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.411933 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.415781 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-j2lbj" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.415792 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.415860 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.562544 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g"] Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.563266 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.566018 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.566178 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-f9hwp" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.566966 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtpff\" (UniqueName: \"kubernetes.io/projected/80dad146-0216-45e2-9007-7c42769b1cde-kube-api-access-mtpff\") pod \"obo-prometheus-operator-668cf9dfbb-gtnb5\" (UID: \"80dad146-0216-45e2-9007-7c42769b1cde\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.573715 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg"] Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.574582 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.667896 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46377460-fbee-4d96-99da-d202b1cf4988-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg\" (UID: \"46377460-fbee-4d96-99da-d202b1cf4988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.667954 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46377460-fbee-4d96-99da-d202b1cf4988-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg\" (UID: \"46377460-fbee-4d96-99da-d202b1cf4988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.668084 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtpff\" (UniqueName: \"kubernetes.io/projected/80dad146-0216-45e2-9007-7c42769b1cde-kube-api-access-mtpff\") pod \"obo-prometheus-operator-668cf9dfbb-gtnb5\" (UID: \"80dad146-0216-45e2-9007-7c42769b1cde\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.668113 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6837196-3529-4d41-ad3a-103cff3d6fa6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g\" (UID: \"e6837196-3529-4d41-ad3a-103cff3d6fa6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.668131 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6837196-3529-4d41-ad3a-103cff3d6fa6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g\" (UID: \"e6837196-3529-4d41-ad3a-103cff3d6fa6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.691100 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtpff\" (UniqueName: \"kubernetes.io/projected/80dad146-0216-45e2-9007-7c42769b1cde-kube-api-access-mtpff\") pod \"obo-prometheus-operator-668cf9dfbb-gtnb5\" (UID: \"80dad146-0216-45e2-9007-7c42769b1cde\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.710193 4770 generic.go:334] "Generic (PLEG): container finished" podID="702e680a-b624-4197-94c8-58bbf2c63186" containerID="1bc2d27cffe7e5aba7a6b33061265de949ff5b2396afc93659ea3af6bd9d5708" exitCode=0 Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.710252 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerDied","Data":"1bc2d27cffe7e5aba7a6b33061265de949ff5b2396afc93659ea3af6bd9d5708"} Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.728117 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.753842 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators_80dad146-0216-45e2-9007-7c42769b1cde_0(3b9e8ead1cb9eb0152c7d561fe3597d72561d085d62fa2ad1d96a97d75565cd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.753931 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators_80dad146-0216-45e2-9007-7c42769b1cde_0(3b9e8ead1cb9eb0152c7d561fe3597d72561d085d62fa2ad1d96a97d75565cd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.753964 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators_80dad146-0216-45e2-9007-7c42769b1cde_0(3b9e8ead1cb9eb0152c7d561fe3597d72561d085d62fa2ad1d96a97d75565cd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.754031 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators(80dad146-0216-45e2-9007-7c42769b1cde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators(80dad146-0216-45e2-9007-7c42769b1cde)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators_80dad146-0216-45e2-9007-7c42769b1cde_0(3b9e8ead1cb9eb0152c7d561fe3597d72561d085d62fa2ad1d96a97d75565cd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" podUID="80dad146-0216-45e2-9007-7c42769b1cde" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.759220 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6zbhb"] Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.760328 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.762434 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.763327 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ttlxg" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.769331 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46377460-fbee-4d96-99da-d202b1cf4988-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg\" (UID: \"46377460-fbee-4d96-99da-d202b1cf4988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.769395 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46377460-fbee-4d96-99da-d202b1cf4988-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg\" (UID: \"46377460-fbee-4d96-99da-d202b1cf4988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.769440 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6837196-3529-4d41-ad3a-103cff3d6fa6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g\" (UID: \"e6837196-3529-4d41-ad3a-103cff3d6fa6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.769589 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6837196-3529-4d41-ad3a-103cff3d6fa6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g\" (UID: \"e6837196-3529-4d41-ad3a-103cff3d6fa6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.773101 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46377460-fbee-4d96-99da-d202b1cf4988-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg\" (UID: \"46377460-fbee-4d96-99da-d202b1cf4988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.773213 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6837196-3529-4d41-ad3a-103cff3d6fa6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g\" (UID: \"e6837196-3529-4d41-ad3a-103cff3d6fa6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.773238 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6837196-3529-4d41-ad3a-103cff3d6fa6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g\" (UID: \"e6837196-3529-4d41-ad3a-103cff3d6fa6\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.781231 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46377460-fbee-4d96-99da-d202b1cf4988-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg\" (UID: \"46377460-fbee-4d96-99da-d202b1cf4988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.870696 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtbzk\" (UniqueName: \"kubernetes.io/projected/b4b6e7d6-2797-4d98-bf28-e8e458a538e3-kube-api-access-dtbzk\") pod \"observability-operator-d8bb48f5d-6zbhb\" (UID: \"b4b6e7d6-2797-4d98-bf28-e8e458a538e3\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.870790 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4b6e7d6-2797-4d98-bf28-e8e458a538e3-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6zbhb\" (UID: \"b4b6e7d6-2797-4d98-bf28-e8e458a538e3\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.878457 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.890886 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.901522 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators_e6837196-3529-4d41-ad3a-103cff3d6fa6_0(9ba0ab130e5da6b122626cfa5160c0955a76340156fde9de012706794b400c72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.901604 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators_e6837196-3529-4d41-ad3a-103cff3d6fa6_0(9ba0ab130e5da6b122626cfa5160c0955a76340156fde9de012706794b400c72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.901638 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators_e6837196-3529-4d41-ad3a-103cff3d6fa6_0(9ba0ab130e5da6b122626cfa5160c0955a76340156fde9de012706794b400c72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.901702 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators(e6837196-3529-4d41-ad3a-103cff3d6fa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators(e6837196-3529-4d41-ad3a-103cff3d6fa6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators_e6837196-3529-4d41-ad3a-103cff3d6fa6_0(9ba0ab130e5da6b122626cfa5160c0955a76340156fde9de012706794b400c72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" podUID="e6837196-3529-4d41-ad3a-103cff3d6fa6" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.936196 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators_46377460-fbee-4d96-99da-d202b1cf4988_0(7b8b66855487578ecb16f64e4f139b461cbbaa0583a00e7fcbb1ed0771f0aa56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.936300 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators_46377460-fbee-4d96-99da-d202b1cf4988_0(7b8b66855487578ecb16f64e4f139b461cbbaa0583a00e7fcbb1ed0771f0aa56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.936332 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators_46377460-fbee-4d96-99da-d202b1cf4988_0(7b8b66855487578ecb16f64e4f139b461cbbaa0583a00e7fcbb1ed0771f0aa56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:35:53 crc kubenswrapper[4770]: E1209 14:35:53.936389 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators(46377460-fbee-4d96-99da-d202b1cf4988)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators(46377460-fbee-4d96-99da-d202b1cf4988)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators_46377460-fbee-4d96-99da-d202b1cf4988_0(7b8b66855487578ecb16f64e4f139b461cbbaa0583a00e7fcbb1ed0771f0aa56): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" podUID="46377460-fbee-4d96-99da-d202b1cf4988" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.971918 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtbzk\" (UniqueName: \"kubernetes.io/projected/b4b6e7d6-2797-4d98-bf28-e8e458a538e3-kube-api-access-dtbzk\") pod \"observability-operator-d8bb48f5d-6zbhb\" (UID: \"b4b6e7d6-2797-4d98-bf28-e8e458a538e3\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.971987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4b6e7d6-2797-4d98-bf28-e8e458a538e3-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6zbhb\" (UID: \"b4b6e7d6-2797-4d98-bf28-e8e458a538e3\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:53 crc kubenswrapper[4770]: I1209 14:35:53.978014 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4b6e7d6-2797-4d98-bf28-e8e458a538e3-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6zbhb\" (UID: \"b4b6e7d6-2797-4d98-bf28-e8e458a538e3\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.006538 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-t4rsk"] Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.007455 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:54 crc kubenswrapper[4770]: W1209 14:35:54.010709 4770 reflector.go:561] object-"openshift-operators"/"perses-operator-dockercfg-h82dh": failed to list *v1.Secret: secrets "perses-operator-dockercfg-h82dh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Dec 09 14:35:54 crc kubenswrapper[4770]: E1209 14:35:54.010825 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"perses-operator-dockercfg-h82dh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"perses-operator-dockercfg-h82dh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.012174 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtbzk\" (UniqueName: \"kubernetes.io/projected/b4b6e7d6-2797-4d98-bf28-e8e458a538e3-kube-api-access-dtbzk\") pod \"observability-operator-d8bb48f5d-6zbhb\" (UID: \"b4b6e7d6-2797-4d98-bf28-e8e458a538e3\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.128526 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:54 crc kubenswrapper[4770]: E1209 14:35:54.166455 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6zbhb_openshift-operators_b4b6e7d6-2797-4d98-bf28-e8e458a538e3_0(023b8f6b815b9e10ed664bae0e0ea976e5de28a58b8d477bc8eeb0523754b719): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:35:54 crc kubenswrapper[4770]: E1209 14:35:54.166523 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6zbhb_openshift-operators_b4b6e7d6-2797-4d98-bf28-e8e458a538e3_0(023b8f6b815b9e10ed664bae0e0ea976e5de28a58b8d477bc8eeb0523754b719): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:54 crc kubenswrapper[4770]: E1209 14:35:54.166543 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6zbhb_openshift-operators_b4b6e7d6-2797-4d98-bf28-e8e458a538e3_0(023b8f6b815b9e10ed664bae0e0ea976e5de28a58b8d477bc8eeb0523754b719): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:35:54 crc kubenswrapper[4770]: E1209 14:35:54.166591 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-6zbhb_openshift-operators(b4b6e7d6-2797-4d98-bf28-e8e458a538e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-6zbhb_openshift-operators(b4b6e7d6-2797-4d98-bf28-e8e458a538e3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6zbhb_openshift-operators_b4b6e7d6-2797-4d98-bf28-e8e458a538e3_0(023b8f6b815b9e10ed664bae0e0ea976e5de28a58b8d477bc8eeb0523754b719): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" podUID="b4b6e7d6-2797-4d98-bf28-e8e458a538e3" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.174253 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fbpw\" (UniqueName: \"kubernetes.io/projected/e8747f27-08b4-4075-9275-1f3cf8d5edea-kube-api-access-9fbpw\") pod \"perses-operator-5446b9c989-t4rsk\" (UID: \"e8747f27-08b4-4075-9275-1f3cf8d5edea\") " pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.174312 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8747f27-08b4-4075-9275-1f3cf8d5edea-openshift-service-ca\") pod \"perses-operator-5446b9c989-t4rsk\" (UID: \"e8747f27-08b4-4075-9275-1f3cf8d5edea\") " pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.275920 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8747f27-08b4-4075-9275-1f3cf8d5edea-openshift-service-ca\") pod \"perses-operator-5446b9c989-t4rsk\" (UID: \"e8747f27-08b4-4075-9275-1f3cf8d5edea\") " pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.276081 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fbpw\" (UniqueName: \"kubernetes.io/projected/e8747f27-08b4-4075-9275-1f3cf8d5edea-kube-api-access-9fbpw\") pod \"perses-operator-5446b9c989-t4rsk\" (UID: \"e8747f27-08b4-4075-9275-1f3cf8d5edea\") " pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.277357 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8747f27-08b4-4075-9275-1f3cf8d5edea-openshift-service-ca\") pod \"perses-operator-5446b9c989-t4rsk\" (UID: \"e8747f27-08b4-4075-9275-1f3cf8d5edea\") " pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.295566 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fbpw\" (UniqueName: \"kubernetes.io/projected/e8747f27-08b4-4075-9275-1f3cf8d5edea-kube-api-access-9fbpw\") pod \"perses-operator-5446b9c989-t4rsk\" (UID: \"e8747f27-08b4-4075-9275-1f3cf8d5edea\") " pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:54 crc kubenswrapper[4770]: I1209 14:35:54.718487 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"24fe3311cdf70315a559c0c2c972aaf1d0385de7d4d8b16df2b95a816779ad76"} Dec 09 14:35:55 crc kubenswrapper[4770]: I1209 14:35:55.040267 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-h82dh" Dec 09 14:35:55 crc kubenswrapper[4770]: I1209 14:35:55.049008 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:55 crc kubenswrapper[4770]: E1209 14:35:55.081844 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-t4rsk_openshift-operators_e8747f27-08b4-4075-9275-1f3cf8d5edea_0(03dc49a3ffb956f6d2e99a7b2a9c84edfa4a9f6e1071cf379048295ba135bc1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:35:55 crc kubenswrapper[4770]: E1209 14:35:55.081921 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-t4rsk_openshift-operators_e8747f27-08b4-4075-9275-1f3cf8d5edea_0(03dc49a3ffb956f6d2e99a7b2a9c84edfa4a9f6e1071cf379048295ba135bc1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:55 crc kubenswrapper[4770]: E1209 14:35:55.081950 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-t4rsk_openshift-operators_e8747f27-08b4-4075-9275-1f3cf8d5edea_0(03dc49a3ffb956f6d2e99a7b2a9c84edfa4a9f6e1071cf379048295ba135bc1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:35:55 crc kubenswrapper[4770]: E1209 14:35:55.082020 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-t4rsk_openshift-operators(e8747f27-08b4-4075-9275-1f3cf8d5edea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-t4rsk_openshift-operators(e8747f27-08b4-4075-9275-1f3cf8d5edea)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-t4rsk_openshift-operators_e8747f27-08b4-4075-9275-1f3cf8d5edea_0(03dc49a3ffb956f6d2e99a7b2a9c84edfa4a9f6e1071cf379048295ba135bc1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" podUID="e8747f27-08b4-4075-9275-1f3cf8d5edea" Dec 09 14:35:55 crc kubenswrapper[4770]: I1209 14:35:55.728806 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"82c39d6f290e99132f537f7d82ed8af1c675f9d542a0e305443813fe565c092b"} Dec 09 14:35:55 crc kubenswrapper[4770]: I1209 14:35:55.729163 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"270a4b4675c1e87aa32b7014c5e1a987cf3e8d3cce1eb07be73e868e4464f5d9"} Dec 09 14:35:55 crc kubenswrapper[4770]: I1209 14:35:55.729175 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"5f2db5febccb7a12682300b8341b7109f3e3915aaee024e4cebc68a4bd38b554"} Dec 09 14:35:56 crc kubenswrapper[4770]: I1209 14:35:56.737081 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"2e373a93eaad0fa55720504f032ec0508e5b16f557ae40a6cbe89b06f60ccf20"} Dec 09 14:35:56 crc kubenswrapper[4770]: I1209 14:35:56.737127 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"c718dddbf84dfe37ad396de930ab5f3d9028e7d6d7d8f016dda9c0189296a548"} Dec 09 14:35:59 crc kubenswrapper[4770]: I1209 14:35:59.406240 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:59 crc kubenswrapper[4770]: I1209 14:35:59.491242 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:35:59 crc kubenswrapper[4770]: I1209 14:35:59.760158 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"0251732a3f042318f130831070fcfccc348ffe5f75a2711aa04d8d9fa6442863"} Dec 09 14:36:00 crc kubenswrapper[4770]: I1209 14:36:00.571029 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6tvz"] Dec 09 14:36:00 crc kubenswrapper[4770]: I1209 14:36:00.766420 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h6tvz" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerName="registry-server" containerID="cri-o://76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9" gracePeriod=2 Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.116027 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.171240 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-utilities\") pod \"cc130d05-d4ba-4206-93ab-7064c5c022ca\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.171379 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-catalog-content\") pod \"cc130d05-d4ba-4206-93ab-7064c5c022ca\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.171420 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l27lp\" (UniqueName: \"kubernetes.io/projected/cc130d05-d4ba-4206-93ab-7064c5c022ca-kube-api-access-l27lp\") pod \"cc130d05-d4ba-4206-93ab-7064c5c022ca\" (UID: \"cc130d05-d4ba-4206-93ab-7064c5c022ca\") " Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.172265 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-utilities" (OuterVolumeSpecName: "utilities") pod "cc130d05-d4ba-4206-93ab-7064c5c022ca" (UID: "cc130d05-d4ba-4206-93ab-7064c5c022ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.176976 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc130d05-d4ba-4206-93ab-7064c5c022ca-kube-api-access-l27lp" (OuterVolumeSpecName: "kube-api-access-l27lp") pod "cc130d05-d4ba-4206-93ab-7064c5c022ca" (UID: "cc130d05-d4ba-4206-93ab-7064c5c022ca"). InnerVolumeSpecName "kube-api-access-l27lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.272955 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l27lp\" (UniqueName: \"kubernetes.io/projected/cc130d05-d4ba-4206-93ab-7064c5c022ca-kube-api-access-l27lp\") on node \"crc\" DevicePath \"\"" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.273272 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.285120 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc130d05-d4ba-4206-93ab-7064c5c022ca" (UID: "cc130d05-d4ba-4206-93ab-7064c5c022ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.374812 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc130d05-d4ba-4206-93ab-7064c5c022ca-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.773964 4770 generic.go:334] "Generic (PLEG): container finished" podID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerID="76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9" exitCode=0 Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.774033 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tvz" event={"ID":"cc130d05-d4ba-4206-93ab-7064c5c022ca","Type":"ContainerDied","Data":"76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9"} Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.774214 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tvz" event={"ID":"cc130d05-d4ba-4206-93ab-7064c5c022ca","Type":"ContainerDied","Data":"5e22512493476e22a548cd4c8c16b40475ecd3c5bc7e8dcc4122e22ce6d887b6"} Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.774064 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6tvz" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.775804 4770 scope.go:117] "RemoveContainer" containerID="76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.791759 4770 scope.go:117] "RemoveContainer" containerID="99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.816903 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6tvz"] Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.819251 4770 scope.go:117] "RemoveContainer" containerID="2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.820366 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h6tvz"] Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.832689 4770 scope.go:117] "RemoveContainer" containerID="76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9" Dec 09 14:36:01 crc kubenswrapper[4770]: E1209 14:36:01.833156 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9\": container with ID starting with 76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9 not found: ID does not exist" containerID="76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9" Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.833199 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9"} err="failed to get container status \"76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9\": rpc error: code = NotFound desc = could not find container \"76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9\": container with ID starting with 76efaad22472d1cf610826e4dd38bc68d1973e89ff9997c869669f5c538c6eb9 not found: ID does not exist" Dec 09 14:36:01 crc 
Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.833227 4770 scope.go:117] "RemoveContainer" containerID="99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a"
Dec 09 14:36:01 crc kubenswrapper[4770]: E1209 14:36:01.833522 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a\": container with ID starting with 99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a not found: ID does not exist" containerID="99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a"
Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.833548 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a"} err="failed to get container status \"99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a\": rpc error: code = NotFound desc = could not find container \"99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a\": container with ID starting with 99b8f44596b47860987db44698c138b4243008ab7f35e3ed10640ee495015e4a not found: ID does not exist"
Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.833564 4770 scope.go:117] "RemoveContainer" containerID="2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d"
Dec 09 14:36:01 crc kubenswrapper[4770]: E1209 14:36:01.833801 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d\": container with ID starting with 2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d not found: ID does not exist" containerID="2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d"
Dec 09 14:36:01 crc kubenswrapper[4770]: I1209 14:36:01.833822 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d"} err="failed to get container status \"2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d\": rpc error: code = NotFound desc = could not find container \"2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d\": container with ID starting with 2960b2e7d5775fe7c380b9456aa00e7a3029f613e40b0e72e357b9a1492b065d not found: ID does not exist"
Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.595942 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" path="/var/lib/kubelet/pods/cc130d05-d4ba-4206-93ab-7064c5c022ca/volumes"
Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.800957 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" event={"ID":"702e680a-b624-4197-94c8-58bbf2c63186","Type":"ContainerStarted","Data":"ab0cab68420f5ceda57fb67ae0554ca296ff29ff056a4f72cc0c40d6949a7e62"}
Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.801443 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.832941 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp"
Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.839319 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" podStartSLOduration=12.83929735 podStartE2EDuration="12.83929735s" podCreationTimestamp="2025-12-09 14:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:36:02.834348666 +0000 UTC m=+794.730550802" watchObservedRunningTime="2025-12-09 14:36:02.83929735 +0000 UTC m=+794.735499486" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.877248 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-t4rsk"] Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.877400 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.878021 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.885331 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g"] Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.885470 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.885927 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.890277 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6zbhb"] Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.890423 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.890874 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.895512 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5"] Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.895648 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.896099 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.900100 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg"] Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.900204 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:36:02 crc kubenswrapper[4770]: I1209 14:36:02.900501 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.936546 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-t4rsk_openshift-operators_e8747f27-08b4-4075-9275-1f3cf8d5edea_0(0c623e337c26150a9b41bfacd68230a0c2da06d436aba0f0b8cc7ad8075df87a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.936611 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-t4rsk_openshift-operators_e8747f27-08b4-4075-9275-1f3cf8d5edea_0(0c623e337c26150a9b41bfacd68230a0c2da06d436aba0f0b8cc7ad8075df87a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.936632 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-t4rsk_openshift-operators_e8747f27-08b4-4075-9275-1f3cf8d5edea_0(0c623e337c26150a9b41bfacd68230a0c2da06d436aba0f0b8cc7ad8075df87a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.936685 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-t4rsk_openshift-operators(e8747f27-08b4-4075-9275-1f3cf8d5edea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-t4rsk_openshift-operators(e8747f27-08b4-4075-9275-1f3cf8d5edea)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-t4rsk_openshift-operators_e8747f27-08b4-4075-9275-1f3cf8d5edea_0(0c623e337c26150a9b41bfacd68230a0c2da06d436aba0f0b8cc7ad8075df87a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" podUID="e8747f27-08b4-4075-9275-1f3cf8d5edea" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.944932 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators_80dad146-0216-45e2-9007-7c42769b1cde_0(39738da1a62ed32f7c6fc40e20d78d98e7bfafada0882541846a74777e47d2f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.945374 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators_80dad146-0216-45e2-9007-7c42769b1cde_0(39738da1a62ed32f7c6fc40e20d78d98e7bfafada0882541846a74777e47d2f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.945395 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators_80dad146-0216-45e2-9007-7c42769b1cde_0(39738da1a62ed32f7c6fc40e20d78d98e7bfafada0882541846a74777e47d2f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.945451 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators(80dad146-0216-45e2-9007-7c42769b1cde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators(80dad146-0216-45e2-9007-7c42769b1cde)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gtnb5_openshift-operators_80dad146-0216-45e2-9007-7c42769b1cde_0(39738da1a62ed32f7c6fc40e20d78d98e7bfafada0882541846a74777e47d2f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" podUID="80dad146-0216-45e2-9007-7c42769b1cde" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.955282 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators_46377460-fbee-4d96-99da-d202b1cf4988_0(f8cc15959299e2e5a61857641f09e244a0bbf8ab3bbc16b83882241ed9a0ec87): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.955349 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators_46377460-fbee-4d96-99da-d202b1cf4988_0(f8cc15959299e2e5a61857641f09e244a0bbf8ab3bbc16b83882241ed9a0ec87): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.955373 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators_46377460-fbee-4d96-99da-d202b1cf4988_0(f8cc15959299e2e5a61857641f09e244a0bbf8ab3bbc16b83882241ed9a0ec87): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.955424 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators(46377460-fbee-4d96-99da-d202b1cf4988)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators(46377460-fbee-4d96-99da-d202b1cf4988)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_openshift-operators_46377460-fbee-4d96-99da-d202b1cf4988_0(f8cc15959299e2e5a61857641f09e244a0bbf8ab3bbc16b83882241ed9a0ec87): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" podUID="46377460-fbee-4d96-99da-d202b1cf4988" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.968137 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6zbhb_openshift-operators_b4b6e7d6-2797-4d98-bf28-e8e458a538e3_0(5166c5ccd70d433261aea95c94deb94586a985eadca3a348eb8cd59e8a130354): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.968205 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6zbhb_openshift-operators_b4b6e7d6-2797-4d98-bf28-e8e458a538e3_0(5166c5ccd70d433261aea95c94deb94586a985eadca3a348eb8cd59e8a130354): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.968228 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6zbhb_openshift-operators_b4b6e7d6-2797-4d98-bf28-e8e458a538e3_0(5166c5ccd70d433261aea95c94deb94586a985eadca3a348eb8cd59e8a130354): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.968274 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-6zbhb_openshift-operators(b4b6e7d6-2797-4d98-bf28-e8e458a538e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-6zbhb_openshift-operators(b4b6e7d6-2797-4d98-bf28-e8e458a538e3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6zbhb_openshift-operators_b4b6e7d6-2797-4d98-bf28-e8e458a538e3_0(5166c5ccd70d433261aea95c94deb94586a985eadca3a348eb8cd59e8a130354): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" podUID="b4b6e7d6-2797-4d98-bf28-e8e458a538e3" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.970769 4770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators_e6837196-3529-4d41-ad3a-103cff3d6fa6_0(cced7890f56e6edb7e39f0c50b4e753189fab441271257a772e6c7de2f43b504): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.970796 4770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators_e6837196-3529-4d41-ad3a-103cff3d6fa6_0(cced7890f56e6edb7e39f0c50b4e753189fab441271257a772e6c7de2f43b504): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.970810 4770 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators_e6837196-3529-4d41-ad3a-103cff3d6fa6_0(cced7890f56e6edb7e39f0c50b4e753189fab441271257a772e6c7de2f43b504): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:36:02 crc kubenswrapper[4770]: E1209 14:36:02.970833 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators(e6837196-3529-4d41-ad3a-103cff3d6fa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators(e6837196-3529-4d41-ad3a-103cff3d6fa6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_openshift-operators_e6837196-3529-4d41-ad3a-103cff3d6fa6_0(cced7890f56e6edb7e39f0c50b4e753189fab441271257a772e6c7de2f43b504): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" podUID="e6837196-3529-4d41-ad3a-103cff3d6fa6" Dec 09 14:36:03 crc kubenswrapper[4770]: I1209 14:36:03.811309 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:36:03 crc kubenswrapper[4770]: I1209 14:36:03.812067 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:36:03 crc kubenswrapper[4770]: I1209 14:36:03.840092 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:36:13 crc kubenswrapper[4770]: I1209 14:36:13.588037 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:36:13 crc kubenswrapper[4770]: I1209 14:36:13.588053 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:36:13 crc kubenswrapper[4770]: I1209 14:36:13.589853 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:36:13 crc kubenswrapper[4770]: I1209 14:36:13.589944 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" Dec 09 14:36:13 crc kubenswrapper[4770]: I1209 14:36:13.963915 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6zbhb"] Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.124167 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g"] Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.243616 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.243750 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.243827 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.244576 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a2d7351a37474ac1217b67bfcec9a9843ac24bf4743146b36e820021a607f34"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.244653 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://7a2d7351a37474ac1217b67bfcec9a9843ac24bf4743146b36e820021a607f34" gracePeriod=600 Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.893207 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" event={"ID":"e6837196-3529-4d41-ad3a-103cff3d6fa6","Type":"ContainerStarted","Data":"c103df3b7c32c2ef65a40885a278d7c91bf58d9143fedf2fbfdb49b49ca8fae0"} Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.898455 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="7a2d7351a37474ac1217b67bfcec9a9843ac24bf4743146b36e820021a607f34" exitCode=0 Dec 09 14:36:14 crc 
Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.898501 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"7a2d7351a37474ac1217b67bfcec9a9843ac24bf4743146b36e820021a607f34"}
Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.898581 4770 scope.go:117] "RemoveContainer" containerID="c4143fcf6193bb8d37b6aa9f74630ea967df19039b5e904f79a07122fe7fe763"
Dec 09 14:36:14 crc kubenswrapper[4770]: I1209 14:36:14.901999 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" event={"ID":"b4b6e7d6-2797-4d98-bf28-e8e458a538e3","Type":"ContainerStarted","Data":"f4df8245689307109735ff92e1846937735a0bb9a3f0e1e0e64c7c40c159c6b9"}
Dec 09 14:36:15 crc kubenswrapper[4770]: I1209 14:36:15.588258 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg"
Dec 09 14:36:15 crc kubenswrapper[4770]: I1209 14:36:15.588349 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-t4rsk"
Dec 09 14:36:15 crc kubenswrapper[4770]: I1209 14:36:15.588354 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5"
Dec 09 14:36:15 crc kubenswrapper[4770]: I1209 14:36:15.589182 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg"
Dec 09 14:36:15 crc kubenswrapper[4770]: I1209 14:36:15.589585 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-t4rsk"
Dec 09 14:36:15 crc kubenswrapper[4770]: I1209 14:36:15.589760 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" Dec 09 14:36:15 crc kubenswrapper[4770]: I1209 14:36:15.948571 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5"] Dec 09 14:36:15 crc kubenswrapper[4770]: I1209 14:36:15.976852 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"852cc9377060614876d64ddd5f4b7f4f0b5e6e1aa2ca6cd6b6ccad59c9c4ae81"} Dec 09 14:36:16 crc kubenswrapper[4770]: I1209 14:36:16.006982 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-t4rsk"] Dec 09 14:36:16 crc kubenswrapper[4770]: I1209 14:36:16.126002 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg"] Dec 09 14:36:16 crc kubenswrapper[4770]: I1209 14:36:16.990211 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" event={"ID":"46377460-fbee-4d96-99da-d202b1cf4988","Type":"ContainerStarted","Data":"73e3d5671123e5195dbb2e395b9a56ccc81571b74e8e3b964c376d5822df04c8"} Dec 09 14:36:16 crc kubenswrapper[4770]: I1209 14:36:16.995530 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" event={"ID":"e8747f27-08b4-4075-9275-1f3cf8d5edea","Type":"ContainerStarted","Data":"e3360bf16f2f167b14302f6853d23da6fb20a91c620f7d5508da05fbc892f676"} Dec 09 14:36:16 crc kubenswrapper[4770]: I1209 14:36:16.998277 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" event={"ID":"80dad146-0216-45e2-9007-7c42769b1cde","Type":"ContainerStarted","Data":"8e2024d5e669c7bd8e4cc97b4796af19ab722b5c0fa1a142d781b068498db0ca"} Dec 09 14:36:20 crc kubenswrapper[4770]: I1209 14:36:20.903573 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7vpdp" Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.098012 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" event={"ID":"e6837196-3529-4d41-ad3a-103cff3d6fa6","Type":"ContainerStarted","Data":"ede78cf1b1598d112c30b20c328acfd403c1337bee8eaddc1eef87ecd24b61ed"} Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.100013 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" event={"ID":"e8747f27-08b4-4075-9275-1f3cf8d5edea","Type":"ContainerStarted","Data":"54a5c122876f3ef5f81738ec578e550b8885946008b82eb9447c72f0406d773d"} Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.100310 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.102424 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" event={"ID":"b4b6e7d6-2797-4d98-bf28-e8e458a538e3","Type":"ContainerStarted","Data":"47a13005bdcd8d8010426424e0ab911f4af850deb9ea5cc5fa6f67eee41bc1ec"} Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.102632 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.104433 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" event={"ID":"80dad146-0216-45e2-9007-7c42769b1cde","Type":"ContainerStarted","Data":"50ed852fc81c2622123f1c91326a07749502e8229dddce1211017897eec125be"} Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.106133 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.106705 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" event={"ID":"46377460-fbee-4d96-99da-d202b1cf4988","Type":"ContainerStarted","Data":"8a2107ba448e59760a21526f0cb42205f3ed8d2c6174145e30c59981f2c514f7"} Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.118615 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-tm97g" podStartSLOduration=22.321476644 podStartE2EDuration="36.118593028s" podCreationTimestamp="2025-12-09 14:35:53 +0000 UTC" firstStartedPulling="2025-12-09 14:36:14.131566195 +0000 UTC m=+806.027768331" lastFinishedPulling="2025-12-09 14:36:27.928682579 +0000 UTC m=+819.824884715" observedRunningTime="2025-12-09 14:36:29.116928261 +0000 UTC m=+821.013130397" watchObservedRunningTime="2025-12-09 14:36:29.118593028 +0000 UTC m=+821.014795164" Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.149563 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-6zbhb" podStartSLOduration=22.19819821 podStartE2EDuration="36.149539781s" podCreationTimestamp="2025-12-09 14:35:53 +0000 UTC" firstStartedPulling="2025-12-09 14:36:13.982527599 +0000 UTC m=+805.878729735" lastFinishedPulling="2025-12-09 14:36:27.93386917 +0000 UTC m=+819.830071306" observedRunningTime="2025-12-09 14:36:29.145578126 +0000 UTC m=+821.041780272" watchObservedRunningTime="2025-12-09 14:36:29.149539781 +0000 UTC m=+821.045741917" Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.211337 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg" podStartSLOduration=24.426691978 podStartE2EDuration="36.211271361s" podCreationTimestamp="2025-12-09 14:35:53 +0000 UTC" firstStartedPulling="2025-12-09 14:36:16.148420962 +0000 UTC m=+808.044623098" lastFinishedPulling="2025-12-09 14:36:27.933000335 +0000 UTC m=+819.829202481" observedRunningTime="2025-12-09 14:36:29.195577774 +0000 UTC m=+821.091779910" watchObservedRunningTime="2025-12-09 14:36:29.211271361 +0000 UTC m=+821.107473497" Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.229577 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" podStartSLOduration=24.332447761 podStartE2EDuration="36.229551595s" podCreationTimestamp="2025-12-09 14:35:53 +0000 UTC" firstStartedPulling="2025-12-09 14:36:16.076378722 +0000 UTC m=+807.972580858" lastFinishedPulling="2025-12-09 14:36:27.973482546 +0000 UTC m=+819.869684692" observedRunningTime="2025-12-09 14:36:29.224638241 +0000 UTC m=+821.120840377" watchObservedRunningTime="2025-12-09 14:36:29.229551595 +0000 UTC 
m=+821.125753731" Dec 09 14:36:29 crc kubenswrapper[4770]: I1209 14:36:29.251584 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gtnb5" podStartSLOduration=24.283140655 podStartE2EDuration="36.251559037s" podCreationTimestamp="2025-12-09 14:35:53 +0000 UTC" firstStartedPulling="2025-12-09 14:36:16.000576762 +0000 UTC m=+807.896778898" lastFinishedPulling="2025-12-09 14:36:27.968995104 +0000 UTC m=+819.865197280" observedRunningTime="2025-12-09 14:36:29.242278555 +0000 UTC m=+821.138480691" watchObservedRunningTime="2025-12-09 14:36:29.251559037 +0000 UTC m=+821.147761173" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.051928 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-t4rsk" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.820203 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-dp5sc"] Dec 09 14:36:35 crc kubenswrapper[4770]: E1209 14:36:35.820679 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerName="extract-content" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.820796 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerName="extract-content" Dec 09 14:36:35 crc kubenswrapper[4770]: E1209 14:36:35.820869 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerName="registry-server" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.820924 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerName="registry-server" Dec 09 14:36:35 crc kubenswrapper[4770]: E1209 14:36:35.820979 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerName="extract-utilities" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.821030 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerName="extract-utilities" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.821185 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc130d05-d4ba-4206-93ab-7064c5c022ca" containerName="registry-server" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.821625 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-dp5sc" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.824643 4770 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fh5h9" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.824747 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.837246 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.841325 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hfpn8"] Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.842349 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hfpn8" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.844354 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-dp5sc"] Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.849333 4770 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-clqns" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.851359 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-x7zjn"] Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.852123 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.861565 4770 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-x7wlj" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.866739 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hfpn8"] Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.871942 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-x7zjn"] Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.908274 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfgb2\" (UniqueName: \"kubernetes.io/projected/b035c358-ae16-4c56-adcb-4d271a8f6006-kube-api-access-xfgb2\") pod \"cert-manager-cainjector-7f985d654d-dp5sc\" (UID: \"b035c358-ae16-4c56-adcb-4d271a8f6006\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-dp5sc" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.908369 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5wd7\" (UniqueName: \"kubernetes.io/projected/73b1fc7b-59c7-4427-932c-2a65a41c42e1-kube-api-access-p5wd7\") pod \"cert-manager-webhook-5655c58dd6-x7zjn\" (UID: \"73b1fc7b-59c7-4427-932c-2a65a41c42e1\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" Dec 09 14:36:35 crc kubenswrapper[4770]: I1209 14:36:35.908601 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ksm\" (UniqueName: \"kubernetes.io/projected/bb2fb259-5b01-4326-b16b-891048fa2e18-kube-api-access-l8ksm\") pod \"cert-manager-5b446d88c5-hfpn8\" (UID: \"bb2fb259-5b01-4326-b16b-891048fa2e18\") " pod="cert-manager/cert-manager-5b446d88c5-hfpn8" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.009737 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8ksm\" (UniqueName: \"kubernetes.io/projected/bb2fb259-5b01-4326-b16b-891048fa2e18-kube-api-access-l8ksm\") pod \"cert-manager-5b446d88c5-hfpn8\" (UID: \"bb2fb259-5b01-4326-b16b-891048fa2e18\") " pod="cert-manager/cert-manager-5b446d88c5-hfpn8" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.009826 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfgb2\" (UniqueName: \"kubernetes.io/projected/b035c358-ae16-4c56-adcb-4d271a8f6006-kube-api-access-xfgb2\") pod \"cert-manager-cainjector-7f985d654d-dp5sc\" (UID: \"b035c358-ae16-4c56-adcb-4d271a8f6006\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-dp5sc" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.009879 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5wd7\" (UniqueName: \"kubernetes.io/projected/73b1fc7b-59c7-4427-932c-2a65a41c42e1-kube-api-access-p5wd7\") pod \"cert-manager-webhook-5655c58dd6-x7zjn\" (UID: \"73b1fc7b-59c7-4427-932c-2a65a41c42e1\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.035930 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5wd7\" (UniqueName: \"kubernetes.io/projected/73b1fc7b-59c7-4427-932c-2a65a41c42e1-kube-api-access-p5wd7\") pod \"cert-manager-webhook-5655c58dd6-x7zjn\" (UID: \"73b1fc7b-59c7-4427-932c-2a65a41c42e1\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.035951 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8ksm\" (UniqueName: \"kubernetes.io/projected/bb2fb259-5b01-4326-b16b-891048fa2e18-kube-api-access-l8ksm\") pod \"cert-manager-5b446d88c5-hfpn8\" (UID: \"bb2fb259-5b01-4326-b16b-891048fa2e18\") " pod="cert-manager/cert-manager-5b446d88c5-hfpn8" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.036004 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfgb2\" (UniqueName: \"kubernetes.io/projected/b035c358-ae16-4c56-adcb-4d271a8f6006-kube-api-access-xfgb2\") pod \"cert-manager-cainjector-7f985d654d-dp5sc\" (UID: \"b035c358-ae16-4c56-adcb-4d271a8f6006\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-dp5sc" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.136604 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-dp5sc" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.155317 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hfpn8" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.171663 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.411707 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-dp5sc"] Dec 09 14:36:36 crc kubenswrapper[4770]: W1209 14:36:36.429709 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb035c358_ae16_4c56_adcb_4d271a8f6006.slice/crio-1adeb887d8e996af7ec7340dec1a64407b37338bf01b233854b4086323b14be1 WatchSource:0}: Error finding container 1adeb887d8e996af7ec7340dec1a64407b37338bf01b233854b4086323b14be1: Status 404 returned error can't find the container with id 1adeb887d8e996af7ec7340dec1a64407b37338bf01b233854b4086323b14be1 Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.457892 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hfpn8"] Dec 09 14:36:36 crc kubenswrapper[4770]: I1209 14:36:36.503075 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-x7zjn"] Dec 09 14:36:37 crc kubenswrapper[4770]: I1209 14:36:37.154828 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hfpn8" event={"ID":"bb2fb259-5b01-4326-b16b-891048fa2e18","Type":"ContainerStarted","Data":"c084f80be3b40f37129d366a01eed24d2025203e55f8323c1f577b55a67da366"} Dec 09 14:36:37 crc kubenswrapper[4770]: I1209 14:36:37.156221 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" event={"ID":"73b1fc7b-59c7-4427-932c-2a65a41c42e1","Type":"ContainerStarted","Data":"e97f237dcad27c4822a791af8d8587bc3c25ea15c9efd080591983d7c455d9c3"} Dec 09 14:36:37 crc kubenswrapper[4770]: I1209 14:36:37.161618 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-dp5sc" event={"ID":"b035c358-ae16-4c56-adcb-4d271a8f6006","Type":"ContainerStarted","Data":"1adeb887d8e996af7ec7340dec1a64407b37338bf01b233854b4086323b14be1"} Dec 09 14:36:42 crc kubenswrapper[4770]: I1209 14:36:42.196641 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hfpn8" event={"ID":"bb2fb259-5b01-4326-b16b-891048fa2e18","Type":"ContainerStarted","Data":"37a8351af2dddc86b60ed77d13f6dd79e44d9248d7de6bb8c8a4928567e66726"} Dec 09 14:36:42 crc kubenswrapper[4770]: I1209 14:36:42.200829 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" event={"ID":"73b1fc7b-59c7-4427-932c-2a65a41c42e1","Type":"ContainerStarted","Data":"37b0d25fee9654aea2dbe6884386079558505284434713f212ed8d4febe3aa76"} Dec 09 14:36:42 crc kubenswrapper[4770]: I1209 14:36:42.200932 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" Dec 09 14:36:42 crc kubenswrapper[4770]: I1209 14:36:42.202779 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-dp5sc" event={"ID":"b035c358-ae16-4c56-adcb-4d271a8f6006","Type":"ContainerStarted","Data":"91ec142297842a061fc3960e3821a95f81ef7edd04c800e52dcc66cc868bffa3"} Dec 09 14:36:42 crc kubenswrapper[4770]: I1209 14:36:42.217825 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-hfpn8" podStartSLOduration=2.619999162 podStartE2EDuration="7.217800028s" 
podCreationTimestamp="2025-12-09 14:36:35 +0000 UTC" firstStartedPulling="2025-12-09 14:36:36.469882652 +0000 UTC m=+828.366084788" lastFinishedPulling="2025-12-09 14:36:41.067683528 +0000 UTC m=+832.963885654" observedRunningTime="2025-12-09 14:36:42.212305038 +0000 UTC m=+834.108507174" watchObservedRunningTime="2025-12-09 14:36:42.217800028 +0000 UTC m=+834.114002174" Dec 09 14:36:42 crc kubenswrapper[4770]: I1209 14:36:42.237313 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" podStartSLOduration=2.634402523 podStartE2EDuration="7.237253615s" podCreationTimestamp="2025-12-09 14:36:35 +0000 UTC" firstStartedPulling="2025-12-09 14:36:36.503207034 +0000 UTC m=+828.399409170" lastFinishedPulling="2025-12-09 14:36:41.106058126 +0000 UTC m=+833.002260262" observedRunningTime="2025-12-09 14:36:42.228980594 +0000 UTC m=+834.125182730" watchObservedRunningTime="2025-12-09 14:36:42.237253615 +0000 UTC m=+834.133455791" Dec 09 14:36:42 crc kubenswrapper[4770]: I1209 14:36:42.250022 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-dp5sc" podStartSLOduration=2.626282678 podStartE2EDuration="7.250001388s" podCreationTimestamp="2025-12-09 14:36:35 +0000 UTC" firstStartedPulling="2025-12-09 14:36:36.434184062 +0000 UTC m=+828.330386198" lastFinishedPulling="2025-12-09 14:36:41.057902772 +0000 UTC m=+832.954104908" observedRunningTime="2025-12-09 14:36:42.24599472 +0000 UTC m=+834.142196856" watchObservedRunningTime="2025-12-09 14:36:42.250001388 +0000 UTC m=+834.146203524" Dec 09 14:36:46 crc kubenswrapper[4770]: I1209 14:36:46.175830 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-x7zjn" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.168589 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk"] Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.170445 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.173505 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.177318 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk"] Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.301176 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h2pg\" (UniqueName: \"kubernetes.io/projected/d603f7c6-6898-40e0-a1ba-8411253059af-kube-api-access-9h2pg\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.301300 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.301333 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.402141 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.402226 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.402264 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h2pg\" (UniqueName: \"kubernetes.io/projected/d603f7c6-6898-40e0-a1ba-8411253059af-kube-api-access-9h2pg\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.403228 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.403510 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.423164 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h2pg\" (UniqueName: \"kubernetes.io/projected/d603f7c6-6898-40e0-a1ba-8411253059af-kube-api-access-9h2pg\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.487338 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:11 crc kubenswrapper[4770]: I1209 14:37:11.700753 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk"] Dec 09 14:37:12 crc kubenswrapper[4770]: I1209 14:37:12.400834 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" event={"ID":"d603f7c6-6898-40e0-a1ba-8411253059af","Type":"ContainerStarted","Data":"b05fe1940d9b2536303e0c998063ffaf069aeac928c661f2c6696b8edd0b37db"} Dec 09 14:37:13 crc kubenswrapper[4770]: I1209 14:37:13.407206 4770 generic.go:334] "Generic (PLEG): container finished" podID="d603f7c6-6898-40e0-a1ba-8411253059af" containerID="0ddc60817b3cced2b71137b4b626a8c346e53b18e67d97ea4e621ed82083932e" exitCode=0 Dec 09 14:37:13 crc kubenswrapper[4770]: I1209 14:37:13.407467 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" event={"ID":"d603f7c6-6898-40e0-a1ba-8411253059af","Type":"ContainerDied","Data":"0ddc60817b3cced2b71137b4b626a8c346e53b18e67d97ea4e621ed82083932e"} Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.268466 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.269226 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.271330 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.271487 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.271489 4770 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-jpzd2" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.277626 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.446469 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-674cb\" (UniqueName: \"kubernetes.io/projected/04e2215e-dd71-4993-a2fe-6513ab5f9faf-kube-api-access-674cb\") pod \"minio\" (UID: \"04e2215e-dd71-4993-a2fe-6513ab5f9faf\") " pod="minio-dev/minio" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.446548 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-61daf40b-ea5b-4002-91db-6fd8f985d833\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61daf40b-ea5b-4002-91db-6fd8f985d833\") pod \"minio\" (UID: \"04e2215e-dd71-4993-a2fe-6513ab5f9faf\") " pod="minio-dev/minio" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.548295 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-61daf40b-ea5b-4002-91db-6fd8f985d833\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61daf40b-ea5b-4002-91db-6fd8f985d833\") pod \"minio\" (UID: \"04e2215e-dd71-4993-a2fe-6513ab5f9faf\") " pod="minio-dev/minio" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.548933 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-674cb\" (UniqueName: \"kubernetes.io/projected/04e2215e-dd71-4993-a2fe-6513ab5f9faf-kube-api-access-674cb\") pod \"minio\" (UID: \"04e2215e-dd71-4993-a2fe-6513ab5f9faf\") " pod="minio-dev/minio" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.556302 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.556350 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-61daf40b-ea5b-4002-91db-6fd8f985d833\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61daf40b-ea5b-4002-91db-6fd8f985d833\") pod \"minio\" (UID: \"04e2215e-dd71-4993-a2fe-6513ab5f9faf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8e854c41e9fa5a447ad4344338a92d34bf5e08f2cb1509c4ac21837b322d815/globalmount\"" pod="minio-dev/minio" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.576756 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-674cb\" (UniqueName: \"kubernetes.io/projected/04e2215e-dd71-4993-a2fe-6513ab5f9faf-kube-api-access-674cb\") pod \"minio\" (UID: \"04e2215e-dd71-4993-a2fe-6513ab5f9faf\") " pod="minio-dev/minio" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.581650 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-61daf40b-ea5b-4002-91db-6fd8f985d833\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61daf40b-ea5b-4002-91db-6fd8f985d833\") pod \"minio\" (UID: \"04e2215e-dd71-4993-a2fe-6513ab5f9faf\") " pod="minio-dev/minio" Dec 09 14:37:14 crc kubenswrapper[4770]: I1209 14:37:14.587622 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Dec 09 14:37:15 crc kubenswrapper[4770]: I1209 14:37:15.032311 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Dec 09 14:37:15 crc kubenswrapper[4770]: W1209 14:37:15.033803 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04e2215e_dd71_4993_a2fe_6513ab5f9faf.slice/crio-7b435eaf0691e4e6e7d67be469a5b20cda9d5b4e2813ac724af64d6a14b3973a WatchSource:0}: Error finding container 7b435eaf0691e4e6e7d67be469a5b20cda9d5b4e2813ac724af64d6a14b3973a: Status 404 returned error can't find the container with id 7b435eaf0691e4e6e7d67be469a5b20cda9d5b4e2813ac724af64d6a14b3973a Dec 09 14:37:15 crc kubenswrapper[4770]: I1209 14:37:15.421405 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"04e2215e-dd71-4993-a2fe-6513ab5f9faf","Type":"ContainerStarted","Data":"7b435eaf0691e4e6e7d67be469a5b20cda9d5b4e2813ac724af64d6a14b3973a"} Dec 09 14:37:15 crc kubenswrapper[4770]: I1209 14:37:15.424994 4770 generic.go:334] "Generic (PLEG): container finished" podID="d603f7c6-6898-40e0-a1ba-8411253059af" containerID="702bfbcabaab80427017f1e4e4793d5a73cae2776f5845d4e1fb80b871dfdea3" exitCode=0 Dec 09 14:37:15 crc kubenswrapper[4770]: I1209 14:37:15.425075 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" event={"ID":"d603f7c6-6898-40e0-a1ba-8411253059af","Type":"ContainerDied","Data":"702bfbcabaab80427017f1e4e4793d5a73cae2776f5845d4e1fb80b871dfdea3"} Dec 09 14:37:16 crc kubenswrapper[4770]: I1209 14:37:16.431212 4770 generic.go:334] "Generic (PLEG): container finished" podID="d603f7c6-6898-40e0-a1ba-8411253059af" containerID="b8d60551a100fb18296cb52f879c0ed9a40c3da519d124fe41b6646b8c49f718" exitCode=0 Dec 09 14:37:16 crc kubenswrapper[4770]: I1209 14:37:16.431260 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" 
event={"ID":"d603f7c6-6898-40e0-a1ba-8411253059af","Type":"ContainerDied","Data":"b8d60551a100fb18296cb52f879c0ed9a40c3da519d124fe41b6646b8c49f718"} Dec 09 14:37:17 crc kubenswrapper[4770]: I1209 14:37:17.853183 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:17 crc kubenswrapper[4770]: I1209 14:37:17.999026 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h2pg\" (UniqueName: \"kubernetes.io/projected/d603f7c6-6898-40e0-a1ba-8411253059af-kube-api-access-9h2pg\") pod \"d603f7c6-6898-40e0-a1ba-8411253059af\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " Dec 09 14:37:17 crc kubenswrapper[4770]: I1209 14:37:17.999108 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-bundle\") pod \"d603f7c6-6898-40e0-a1ba-8411253059af\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " Dec 09 14:37:17 crc kubenswrapper[4770]: I1209 14:37:17.999132 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-util\") pod \"d603f7c6-6898-40e0-a1ba-8411253059af\" (UID: \"d603f7c6-6898-40e0-a1ba-8411253059af\") " Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.000176 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-bundle" (OuterVolumeSpecName: "bundle") pod "d603f7c6-6898-40e0-a1ba-8411253059af" (UID: "d603f7c6-6898-40e0-a1ba-8411253059af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.006302 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d603f7c6-6898-40e0-a1ba-8411253059af-kube-api-access-9h2pg" (OuterVolumeSpecName: "kube-api-access-9h2pg") pod "d603f7c6-6898-40e0-a1ba-8411253059af" (UID: "d603f7c6-6898-40e0-a1ba-8411253059af"). InnerVolumeSpecName "kube-api-access-9h2pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.010167 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-util" (OuterVolumeSpecName: "util") pod "d603f7c6-6898-40e0-a1ba-8411253059af" (UID: "d603f7c6-6898-40e0-a1ba-8411253059af"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.100653 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h2pg\" (UniqueName: \"kubernetes.io/projected/d603f7c6-6898-40e0-a1ba-8411253059af-kube-api-access-9h2pg\") on node \"crc\" DevicePath \"\"" Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.100765 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.100785 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d603f7c6-6898-40e0-a1ba-8411253059af-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.444516 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.444515 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk" event={"ID":"d603f7c6-6898-40e0-a1ba-8411253059af","Type":"ContainerDied","Data":"b05fe1940d9b2536303e0c998063ffaf069aeac928c661f2c6696b8edd0b37db"} Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.444593 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b05fe1940d9b2536303e0c998063ffaf069aeac928c661f2c6696b8edd0b37db" Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.445661 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"04e2215e-dd71-4993-a2fe-6513ab5f9faf","Type":"ContainerStarted","Data":"fccf65ccfb05a7a0357f426358ae825688fff7cb1b1f5225e07d7e28304312c4"} Dec 09 14:37:18 crc kubenswrapper[4770]: I1209 14:37:18.467992 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.181745817 podStartE2EDuration="6.46797321s" podCreationTimestamp="2025-12-09 14:37:12 +0000 UTC" firstStartedPulling="2025-12-09 14:37:15.035530835 +0000 UTC m=+866.931732981" lastFinishedPulling="2025-12-09 14:37:18.321758238 +0000 UTC m=+870.217960374" observedRunningTime="2025-12-09 14:37:18.462070017 +0000 UTC m=+870.358272163" watchObservedRunningTime="2025-12-09 14:37:18.46797321 +0000 UTC m=+870.364175346" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.165111 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92"] Dec 09 14:37:23 crc kubenswrapper[4770]: E1209 14:37:23.165906 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d603f7c6-6898-40e0-a1ba-8411253059af" containerName="extract" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.165923 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d603f7c6-6898-40e0-a1ba-8411253059af" containerName="extract" Dec 09 14:37:23 crc kubenswrapper[4770]: E1209 14:37:23.165939 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d603f7c6-6898-40e0-a1ba-8411253059af" containerName="util" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.165947 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d603f7c6-6898-40e0-a1ba-8411253059af" containerName="util" Dec 09 14:37:23 crc kubenswrapper[4770]: E1209 
14:37:23.165955 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d603f7c6-6898-40e0-a1ba-8411253059af" containerName="pull" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.165962 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d603f7c6-6898-40e0-a1ba-8411253059af" containerName="pull" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.166069 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d603f7c6-6898-40e0-a1ba-8411253059af" containerName="extract" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.166628 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.169193 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.169399 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.170864 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.176059 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.176506 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.176641 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-gftgh" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.181461 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92"] Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.363745 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzcp\" (UniqueName: \"kubernetes.io/projected/a562e446-afad-41e8-9169-41f2e14712a2-kube-api-access-jmzcp\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.363816 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a562e446-afad-41e8-9169-41f2e14712a2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.363848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a562e446-afad-41e8-9169-41f2e14712a2-apiservice-cert\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 
14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.363984 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a562e446-afad-41e8-9169-41f2e14712a2-webhook-cert\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.364043 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/a562e446-afad-41e8-9169-41f2e14712a2-manager-config\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.465237 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzcp\" (UniqueName: \"kubernetes.io/projected/a562e446-afad-41e8-9169-41f2e14712a2-kube-api-access-jmzcp\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.465305 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a562e446-afad-41e8-9169-41f2e14712a2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.465327 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a562e446-afad-41e8-9169-41f2e14712a2-apiservice-cert\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.465366 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a562e446-afad-41e8-9169-41f2e14712a2-webhook-cert\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.465383 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/a562e446-afad-41e8-9169-41f2e14712a2-manager-config\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.466420 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/a562e446-afad-41e8-9169-41f2e14712a2-manager-config\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.473117 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a562e446-afad-41e8-9169-41f2e14712a2-webhook-cert\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.475378 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a562e446-afad-41e8-9169-41f2e14712a2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.489511 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzcp\" (UniqueName: \"kubernetes.io/projected/a562e446-afad-41e8-9169-41f2e14712a2-kube-api-access-jmzcp\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.492472 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a562e446-afad-41e8-9169-41f2e14712a2-apiservice-cert\") pod \"loki-operator-controller-manager-7f5c5648d4-9lc92\" (UID: \"a562e446-afad-41e8-9169-41f2e14712a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:23 crc kubenswrapper[4770]: I1209 14:37:23.782661 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:24 crc kubenswrapper[4770]: I1209 14:37:24.027998 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92"] Dec 09 14:37:24 crc kubenswrapper[4770]: I1209 14:37:24.491598 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" event={"ID":"a562e446-afad-41e8-9169-41f2e14712a2","Type":"ContainerStarted","Data":"72e8d70558b62d88dad4a6a942d5286d5f54d474adb7d224502c7b6d11cb2ab9"} Dec 09 14:37:29 crc kubenswrapper[4770]: I1209 14:37:29.527716 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" event={"ID":"a562e446-afad-41e8-9169-41f2e14712a2","Type":"ContainerStarted","Data":"c7fe0cad98449d3a9f11e15b256ab8be73bdddd727734f393c5004b2ceccfb17"} Dec 09 14:37:36 crc kubenswrapper[4770]: I1209 14:37:36.567608 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" event={"ID":"a562e446-afad-41e8-9169-41f2e14712a2","Type":"ContainerStarted","Data":"f067057e7a9087d189a93dc19c55bf886896caab81ba0447474087a7c0ccc341"} Dec 09 14:37:36 crc kubenswrapper[4770]: I1209 14:37:36.570504 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:36 crc kubenswrapper[4770]: I1209 14:37:36.571030 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" Dec 09 14:37:36 crc kubenswrapper[4770]: I1209 14:37:36.591961 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5c5648d4-9lc92" podStartSLOduration=1.91731166 podStartE2EDuration="13.591943731s" podCreationTimestamp="2025-12-09 14:37:23 +0000 UTC" firstStartedPulling="2025-12-09 14:37:24.037909257 +0000 UTC m=+875.934111393" lastFinishedPulling="2025-12-09 14:37:35.712541328 +0000 UTC m=+887.608743464" observedRunningTime="2025-12-09 14:37:36.585923375 +0000 UTC m=+888.482125511" watchObservedRunningTime="2025-12-09 14:37:36.591943731 +0000 UTC m=+888.488145877" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.038844 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp"] Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.040461 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.047575 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.049421 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp"] Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.124456 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd566\" (UniqueName: \"kubernetes.io/projected/57f2139a-7f61-46a6-b130-cce8398d7211-kube-api-access-zd566\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.124516 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.124598 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.226300 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd566\" (UniqueName: \"kubernetes.io/projected/57f2139a-7f61-46a6-b130-cce8398d7211-kube-api-access-zd566\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.226371 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.226425 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.227030 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.227078 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.251072 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd566\" (UniqueName: \"kubernetes.io/projected/57f2139a-7f61-46a6-b130-cce8398d7211-kube-api-access-zd566\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.353837 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.554898 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp"] Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.810160 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" event={"ID":"57f2139a-7f61-46a6-b130-cce8398d7211","Type":"ContainerStarted","Data":"82be2d7ca278b4f2095091d6049fcebcf4be491b1cb6387ff1b03d7c9da5ee83"} Dec 09 14:38:10 crc kubenswrapper[4770]: I1209 14:38:10.810494 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" event={"ID":"57f2139a-7f61-46a6-b130-cce8398d7211","Type":"ContainerStarted","Data":"30eba2cb1ba281f23b20c75007bbbf466e1a3238a11b3009f0197ddf099409b4"} Dec 09 14:38:11 crc kubenswrapper[4770]: I1209 14:38:11.827079 4770 generic.go:334] "Generic (PLEG): container finished" podID="57f2139a-7f61-46a6-b130-cce8398d7211" containerID="82be2d7ca278b4f2095091d6049fcebcf4be491b1cb6387ff1b03d7c9da5ee83" exitCode=0 Dec 09 14:38:11 crc kubenswrapper[4770]: I1209 14:38:11.827140 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" event={"ID":"57f2139a-7f61-46a6-b130-cce8398d7211","Type":"ContainerDied","Data":"82be2d7ca278b4f2095091d6049fcebcf4be491b1cb6387ff1b03d7c9da5ee83"} Dec 09 14:38:13 crc kubenswrapper[4770]: I1209 14:38:13.840334 4770 generic.go:334] "Generic (PLEG): container finished" podID="57f2139a-7f61-46a6-b130-cce8398d7211" containerID="d7136cbc533634911d14a63d7cda04b5141bd2f16c6c12f12b5b2a71c059595a" exitCode=0 Dec 09 14:38:13 crc kubenswrapper[4770]: I1209 14:38:13.840430 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" 
event={"ID":"57f2139a-7f61-46a6-b130-cce8398d7211","Type":"ContainerDied","Data":"d7136cbc533634911d14a63d7cda04b5141bd2f16c6c12f12b5b2a71c059595a"} Dec 09 14:38:14 crc kubenswrapper[4770]: I1209 14:38:14.851929 4770 generic.go:334] "Generic (PLEG): container finished" podID="57f2139a-7f61-46a6-b130-cce8398d7211" containerID="1cba0416179ee010ab5197ab43f70f5226792e8d5b18c3c9be97aa4f76f633c6" exitCode=0 Dec 09 14:38:14 crc kubenswrapper[4770]: I1209 14:38:14.852001 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" event={"ID":"57f2139a-7f61-46a6-b130-cce8398d7211","Type":"ContainerDied","Data":"1cba0416179ee010ab5197ab43f70f5226792e8d5b18c3c9be97aa4f76f633c6"} Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.083920 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.215321 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-bundle\") pod \"57f2139a-7f61-46a6-b130-cce8398d7211\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.215541 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-util\") pod \"57f2139a-7f61-46a6-b130-cce8398d7211\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.215605 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd566\" (UniqueName: \"kubernetes.io/projected/57f2139a-7f61-46a6-b130-cce8398d7211-kube-api-access-zd566\") pod \"57f2139a-7f61-46a6-b130-cce8398d7211\" (UID: \"57f2139a-7f61-46a6-b130-cce8398d7211\") " Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.216404 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-bundle" (OuterVolumeSpecName: "bundle") pod "57f2139a-7f61-46a6-b130-cce8398d7211" (UID: "57f2139a-7f61-46a6-b130-cce8398d7211"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.221656 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.252040 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f2139a-7f61-46a6-b130-cce8398d7211-kube-api-access-zd566" (OuterVolumeSpecName: "kube-api-access-zd566") pod "57f2139a-7f61-46a6-b130-cce8398d7211" (UID: "57f2139a-7f61-46a6-b130-cce8398d7211"). InnerVolumeSpecName "kube-api-access-zd566". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.322534 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd566\" (UniqueName: \"kubernetes.io/projected/57f2139a-7f61-46a6-b130-cce8398d7211-kube-api-access-zd566\") on node \"crc\" DevicePath \"\"" Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.334873 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-util" (OuterVolumeSpecName: "util") pod "57f2139a-7f61-46a6-b130-cce8398d7211" (UID: "57f2139a-7f61-46a6-b130-cce8398d7211"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.424118 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f2139a-7f61-46a6-b130-cce8398d7211-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.868189 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" event={"ID":"57f2139a-7f61-46a6-b130-cce8398d7211","Type":"ContainerDied","Data":"30eba2cb1ba281f23b20c75007bbbf466e1a3238a11b3009f0197ddf099409b4"} Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.868246 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30eba2cb1ba281f23b20c75007bbbf466e1a3238a11b3009f0197ddf099409b4" Dec 09 14:38:16 crc kubenswrapper[4770]: I1209 14:38:16.868302 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.908522 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6"] Dec 09 14:38:21 crc kubenswrapper[4770]: E1209 14:38:21.909270 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f2139a-7f61-46a6-b130-cce8398d7211" containerName="extract" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.909284 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f2139a-7f61-46a6-b130-cce8398d7211" containerName="extract" Dec 09 14:38:21 crc kubenswrapper[4770]: E1209 14:38:21.909300 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f2139a-7f61-46a6-b130-cce8398d7211" containerName="util" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.909306 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f2139a-7f61-46a6-b130-cce8398d7211" containerName="util" Dec 09 14:38:21 crc kubenswrapper[4770]: E1209 14:38:21.909322 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f2139a-7f61-46a6-b130-cce8398d7211" containerName="pull" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.909339 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f2139a-7f61-46a6-b130-cce8398d7211" containerName="pull" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.909430 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f2139a-7f61-46a6-b130-cce8398d7211" containerName="extract" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.909841 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.911787 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.912882 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.913149 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bfq6g" Dec 09 14:38:21 crc kubenswrapper[4770]: I1209 14:38:21.922617 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6"] Dec 09 14:38:22 crc kubenswrapper[4770]: I1209 14:38:22.004277 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh56x\" (UniqueName: \"kubernetes.io/projected/c5ee01e4-882a-47a7-9050-ef1076cca725-kube-api-access-xh56x\") pod \"nmstate-operator-5b5b58f5c8-g49f6\" (UID: \"c5ee01e4-882a-47a7-9050-ef1076cca725\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6" Dec 09 14:38:22 crc kubenswrapper[4770]: I1209 14:38:22.105792 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh56x\" (UniqueName: \"kubernetes.io/projected/c5ee01e4-882a-47a7-9050-ef1076cca725-kube-api-access-xh56x\") pod \"nmstate-operator-5b5b58f5c8-g49f6\" (UID: \"c5ee01e4-882a-47a7-9050-ef1076cca725\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6" Dec 09 14:38:22 crc kubenswrapper[4770]: I1209 14:38:22.142623 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh56x\" (UniqueName: \"kubernetes.io/projected/c5ee01e4-882a-47a7-9050-ef1076cca725-kube-api-access-xh56x\") pod \"nmstate-operator-5b5b58f5c8-g49f6\" (UID: \"c5ee01e4-882a-47a7-9050-ef1076cca725\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6" Dec 09 14:38:22 crc kubenswrapper[4770]: I1209 14:38:22.264220 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6" Dec 09 14:38:22 crc kubenswrapper[4770]: I1209 14:38:22.469618 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6"] Dec 09 14:38:22 crc kubenswrapper[4770]: I1209 14:38:22.913303 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6" event={"ID":"c5ee01e4-882a-47a7-9050-ef1076cca725","Type":"ContainerStarted","Data":"1df47cefade35e53f752261fc8f6bc9d884bc0688e9f1d56c2393e969eba1ab6"} Dec 09 14:38:25 crc kubenswrapper[4770]: I1209 14:38:25.930515 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6" event={"ID":"c5ee01e4-882a-47a7-9050-ef1076cca725","Type":"ContainerStarted","Data":"9d9da87bb9ea30076042714026636e411de0aacc36564423ed54325660cfc9c6"} Dec 09 14:38:25 crc kubenswrapper[4770]: I1209 14:38:25.948517 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g49f6" podStartSLOduration=2.208907499 podStartE2EDuration="4.948495865s" podCreationTimestamp="2025-12-09 14:38:21 +0000 UTC" firstStartedPulling="2025-12-09 14:38:22.479337097 +0000 UTC m=+934.375539223" lastFinishedPulling="2025-12-09 14:38:25.218925453 +0000 UTC m=+937.115127589" observedRunningTime="2025-12-09 14:38:25.94470242 +0000 UTC m=+937.840904576" watchObservedRunningTime="2025-12-09 14:38:25.948495865 +0000 UTC m=+937.844698001" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.099547 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.101054 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.107799 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7wkvw" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.114798 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.115717 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.119630 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.120909 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.149239 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lg6q8"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.150021 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.167462 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.231460 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/406c7a97-d031-4d16-a20d-050cba5596a5-ovs-socket\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.231520 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/406c7a97-d031-4d16-a20d-050cba5596a5-nmstate-lock\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.231545 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czwjg\" (UniqueName: \"kubernetes.io/projected/eb3ead0c-f191-4518-83b9-98216d653eba-kube-api-access-czwjg\") pod \"nmstate-metrics-7f946cbc9-82bn5\" (UID: \"eb3ead0c-f191-4518-83b9-98216d653eba\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.231678 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6014298-34b9-4a4e-8a5b-578bc2ae90d6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-j58vx\" (UID: \"b6014298-34b9-4a4e-8a5b-578bc2ae90d6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.231753 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-588gt\" (UniqueName: \"kubernetes.io/projected/b6014298-34b9-4a4e-8a5b-578bc2ae90d6-kube-api-access-588gt\") pod \"nmstate-webhook-5f6d4c5ccb-j58vx\" (UID: \"b6014298-34b9-4a4e-8a5b-578bc2ae90d6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.231793 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/406c7a97-d031-4d16-a20d-050cba5596a5-dbus-socket\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.231856 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2jth\" (UniqueName: \"kubernetes.io/projected/406c7a97-d031-4d16-a20d-050cba5596a5-kube-api-access-g2jth\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.277587 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.278525 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.281643 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.281649 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.291253 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6f7ws" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333422 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2jth\" (UniqueName: \"kubernetes.io/projected/406c7a97-d031-4d16-a20d-050cba5596a5-kube-api-access-g2jth\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333482 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/406c7a97-d031-4d16-a20d-050cba5596a5-ovs-socket\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333540 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/406c7a97-d031-4d16-a20d-050cba5596a5-nmstate-lock\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333562 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czwjg\" (UniqueName: \"kubernetes.io/projected/eb3ead0c-f191-4518-83b9-98216d653eba-kube-api-access-czwjg\") pod \"nmstate-metrics-7f946cbc9-82bn5\" (UID: \"eb3ead0c-f191-4518-83b9-98216d653eba\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333593 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6014298-34b9-4a4e-8a5b-578bc2ae90d6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-j58vx\" (UID: \"b6014298-34b9-4a4e-8a5b-578bc2ae90d6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333610 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-588gt\" (UniqueName: \"kubernetes.io/projected/b6014298-34b9-4a4e-8a5b-578bc2ae90d6-kube-api-access-588gt\") pod \"nmstate-webhook-5f6d4c5ccb-j58vx\" (UID: \"b6014298-34b9-4a4e-8a5b-578bc2ae90d6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/406c7a97-d031-4d16-a20d-050cba5596a5-dbus-socket\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333672 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/406c7a97-d031-4d16-a20d-050cba5596a5-nmstate-lock\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333760 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/406c7a97-d031-4d16-a20d-050cba5596a5-ovs-socket\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: E1209 14:38:31.333819 4770 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.333896 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/406c7a97-d031-4d16-a20d-050cba5596a5-dbus-socket\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: E1209 14:38:31.334289 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6014298-34b9-4a4e-8a5b-578bc2ae90d6-tls-key-pair podName:b6014298-34b9-4a4e-8a5b-578bc2ae90d6 nodeName:}" failed. No retries permitted until 2025-12-09 14:38:31.834218771 +0000 UTC m=+943.730420917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b6014298-34b9-4a4e-8a5b-578bc2ae90d6-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-j58vx" (UID: "b6014298-34b9-4a4e-8a5b-578bc2ae90d6") : secret "openshift-nmstate-webhook" not found Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.353446 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.364679 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czwjg\" (UniqueName: \"kubernetes.io/projected/eb3ead0c-f191-4518-83b9-98216d653eba-kube-api-access-czwjg\") pod \"nmstate-metrics-7f946cbc9-82bn5\" (UID: \"eb3ead0c-f191-4518-83b9-98216d653eba\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.365212 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-588gt\" (UniqueName: \"kubernetes.io/projected/b6014298-34b9-4a4e-8a5b-578bc2ae90d6-kube-api-access-588gt\") pod \"nmstate-webhook-5f6d4c5ccb-j58vx\" (UID: \"b6014298-34b9-4a4e-8a5b-578bc2ae90d6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.374379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2jth\" (UniqueName: \"kubernetes.io/projected/406c7a97-d031-4d16-a20d-050cba5596a5-kube-api-access-g2jth\") pod \"nmstate-handler-lg6q8\" (UID: \"406c7a97-d031-4d16-a20d-050cba5596a5\") " pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.419124 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.434289 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.434380 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.434423 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks8bf\" (UniqueName: \"kubernetes.io/projected/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-kube-api-access-ks8bf\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.464540 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.518381 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-745bd78b79-v8zh6"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.519064 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.532059 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-745bd78b79-v8zh6"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.536229 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.536402 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks8bf\" (UniqueName: \"kubernetes.io/projected/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-kube-api-access-ks8bf\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.536516 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:31 crc kubenswrapper[4770]: E1209 14:38:31.536695 4770 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 09 14:38:31 crc kubenswrapper[4770]: E1209 14:38:31.536760 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-plugin-serving-cert podName:a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9 nodeName:}" failed. No retries permitted until 2025-12-09 14:38:32.036742926 +0000 UTC m=+943.932945062 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-g7sx7" (UID: "a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9") : secret "plugin-serving-cert" not found Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.537160 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.559812 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks8bf\" (UniqueName: \"kubernetes.io/projected/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-kube-api-access-ks8bf\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.637569 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-service-ca\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.637624 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-trusted-ca-bundle\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.637653 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6rsw\" (UniqueName: \"kubernetes.io/projected/d9076f3b-abcd-4eda-ba3d-f845b051df62-kube-api-access-w6rsw\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.637736 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d9076f3b-abcd-4eda-ba3d-f845b051df62-console-oauth-config\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.637754 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9076f3b-abcd-4eda-ba3d-f845b051df62-console-serving-cert\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.637804 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-oauth-serving-cert\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") 
" pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.637924 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-console-config\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.739606 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-trusted-ca-bundle\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.739670 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6rsw\" (UniqueName: \"kubernetes.io/projected/d9076f3b-abcd-4eda-ba3d-f845b051df62-kube-api-access-w6rsw\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.739720 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d9076f3b-abcd-4eda-ba3d-f845b051df62-console-oauth-config\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.739768 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9076f3b-abcd-4eda-ba3d-f845b051df62-console-serving-cert\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.739832 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-oauth-serving-cert\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.739865 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-console-config\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.739895 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-service-ca\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.740941 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-service-ca\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " 
pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.741194 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-trusted-ca-bundle\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.741981 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-oauth-serving-cert\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.743695 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d9076f3b-abcd-4eda-ba3d-f845b051df62-console-config\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.748292 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d9076f3b-abcd-4eda-ba3d-f845b051df62-console-oauth-config\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.748305 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9076f3b-abcd-4eda-ba3d-f845b051df62-console-serving-cert\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.755449 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6rsw\" (UniqueName: \"kubernetes.io/projected/d9076f3b-abcd-4eda-ba3d-f845b051df62-kube-api-access-w6rsw\") pod \"console-745bd78b79-v8zh6\" (UID: \"d9076f3b-abcd-4eda-ba3d-f845b051df62\") " pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.834390 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.840911 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6014298-34b9-4a4e-8a5b-578bc2ae90d6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-j58vx\" (UID: \"b6014298-34b9-4a4e-8a5b-578bc2ae90d6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.845230 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6014298-34b9-4a4e-8a5b-578bc2ae90d6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-j58vx\" (UID: \"b6014298-34b9-4a4e-8a5b-578bc2ae90d6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.898425 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5"] Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.963985 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5" event={"ID":"eb3ead0c-f191-4518-83b9-98216d653eba","Type":"ContainerStarted","Data":"f36bb499d58e3891ca17b3caa3fb038f72cbd09034e95521ae35b109968c7f3c"} Dec 09 14:38:31 crc kubenswrapper[4770]: I1209 14:38:31.965531 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lg6q8" event={"ID":"406c7a97-d031-4d16-a20d-050cba5596a5","Type":"ContainerStarted","Data":"e932736d46b693ba4902e7b2147b4eff1d8ae5f405f081b21b1c545ba3a916cf"} Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.034235 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.044106 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.054810 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-g7sx7\" (UID: \"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.058470 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-745bd78b79-v8zh6"] Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.192893 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.224227 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx"] Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.403434 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7"] Dec 09 14:38:32 crc kubenswrapper[4770]: W1209 14:38:32.468196 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda88f9c2f_0ff9_4cae_9dc3_541c94c1cdc9.slice/crio-03a4f71a0415a9c1de3a0337cb224968718687e1cc75b074b2a4e6dccc66a8b4 WatchSource:0}: Error finding container 03a4f71a0415a9c1de3a0337cb224968718687e1cc75b074b2a4e6dccc66a8b4: Status 404 returned error can't find the container with id 03a4f71a0415a9c1de3a0337cb224968718687e1cc75b074b2a4e6dccc66a8b4 Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.974415 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" event={"ID":"b6014298-34b9-4a4e-8a5b-578bc2ae90d6","Type":"ContainerStarted","Data":"ec949a283fe14a337c464f3220f758c266606449bcd85be09bb9d81841bf899c"} Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.976262 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-745bd78b79-v8zh6" event={"ID":"d9076f3b-abcd-4eda-ba3d-f845b051df62","Type":"ContainerStarted","Data":"b0d03e8f0f9763582c17bd65672e88db9c6fc108ac44f08ce6316a38db8acd28"} Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.976295 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-745bd78b79-v8zh6" event={"ID":"d9076f3b-abcd-4eda-ba3d-f845b051df62","Type":"ContainerStarted","Data":"4dda0c9ac05da856d499a37b48933248e3f8da0c1f8cf0642cd7aa600942dcf3"} Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.979259 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" event={"ID":"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9","Type":"ContainerStarted","Data":"03a4f71a0415a9c1de3a0337cb224968718687e1cc75b074b2a4e6dccc66a8b4"} Dec 09 14:38:32 crc kubenswrapper[4770]: I1209 14:38:32.996862 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-745bd78b79-v8zh6" podStartSLOduration=1.996837695 podStartE2EDuration="1.996837695s" podCreationTimestamp="2025-12-09 14:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:38:32.994945433 +0000 UTC m=+944.891147569" watchObservedRunningTime="2025-12-09 14:38:32.996837695 +0000 UTC m=+944.893039841" Dec 09 14:38:34 crc kubenswrapper[4770]: I1209 14:38:34.993279 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5" event={"ID":"eb3ead0c-f191-4518-83b9-98216d653eba","Type":"ContainerStarted","Data":"a5441bb496a9824c36da2dce145a3e625a9f5df7151409f4d6a350bb01248321"} Dec 09 14:38:34 crc kubenswrapper[4770]: I1209 14:38:34.996320 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" event={"ID":"b6014298-34b9-4a4e-8a5b-578bc2ae90d6","Type":"ContainerStarted","Data":"186dfd14a02ada97c8c5dd22f0030073e65ac3cd02b827b77b242fd358b02e75"} Dec 09 14:38:34 crc kubenswrapper[4770]: 
I1209 14:38:34.996434 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:38:34 crc kubenswrapper[4770]: I1209 14:38:34.999083 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lg6q8" event={"ID":"406c7a97-d031-4d16-a20d-050cba5596a5","Type":"ContainerStarted","Data":"fcfa315a868557d9b0dc426b983c67d06e14798032f6b463fd77d642a11d5b8d"} Dec 09 14:38:34 crc kubenswrapper[4770]: I1209 14:38:34.999545 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:35 crc kubenswrapper[4770]: I1209 14:38:35.021090 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" podStartSLOduration=2.444260844 podStartE2EDuration="4.021069402s" podCreationTimestamp="2025-12-09 14:38:31 +0000 UTC" firstStartedPulling="2025-12-09 14:38:32.467040003 +0000 UTC m=+944.363242139" lastFinishedPulling="2025-12-09 14:38:34.043848561 +0000 UTC m=+945.940050697" observedRunningTime="2025-12-09 14:38:35.016017863 +0000 UTC m=+946.912220029" watchObservedRunningTime="2025-12-09 14:38:35.021069402 +0000 UTC m=+946.917271548" Dec 09 14:38:35 crc kubenswrapper[4770]: I1209 14:38:35.037164 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lg6q8" podStartSLOduration=1.491333333 podStartE2EDuration="4.037142666s" podCreationTimestamp="2025-12-09 14:38:31 +0000 UTC" firstStartedPulling="2025-12-09 14:38:31.49917435 +0000 UTC m=+943.395376486" lastFinishedPulling="2025-12-09 14:38:34.044983683 +0000 UTC m=+945.941185819" observedRunningTime="2025-12-09 14:38:35.033043002 +0000 UTC m=+946.929245148" watchObservedRunningTime="2025-12-09 14:38:35.037142666 +0000 UTC m=+946.933344802" Dec 09 14:38:36 crc kubenswrapper[4770]: I1209 14:38:36.007670 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" event={"ID":"a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9","Type":"ContainerStarted","Data":"32403b4c184c2d2a9ef5705d017b8e4f2bc2540ecbf86042f03ab79cc5f354bc"} Dec 09 14:38:36 crc kubenswrapper[4770]: I1209 14:38:36.020482 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-g7sx7" podStartSLOduration=2.413034103 podStartE2EDuration="5.020461295s" podCreationTimestamp="2025-12-09 14:38:31 +0000 UTC" firstStartedPulling="2025-12-09 14:38:32.470877149 +0000 UTC m=+944.367079285" lastFinishedPulling="2025-12-09 14:38:35.078304341 +0000 UTC m=+946.974506477" observedRunningTime="2025-12-09 14:38:36.018938063 +0000 UTC m=+947.915140209" watchObservedRunningTime="2025-12-09 14:38:36.020461295 +0000 UTC m=+947.916663421" Dec 09 14:38:37 crc kubenswrapper[4770]: I1209 14:38:37.016360 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5" event={"ID":"eb3ead0c-f191-4518-83b9-98216d653eba","Type":"ContainerStarted","Data":"644f9e103ed9fdd1d79dcc164740fd69efb5f3621b5d3af60f5718c0bc7adbd4"} Dec 09 14:38:37 crc kubenswrapper[4770]: I1209 14:38:37.044906 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-82bn5" podStartSLOduration=1.8513921039999999 podStartE2EDuration="6.044871268s" podCreationTimestamp="2025-12-09 14:38:31 +0000 UTC" firstStartedPulling="2025-12-09 
14:38:31.90161776 +0000 UTC m=+943.797819926" lastFinishedPulling="2025-12-09 14:38:36.095096954 +0000 UTC m=+947.991299090" observedRunningTime="2025-12-09 14:38:37.037244908 +0000 UTC m=+948.933447074" watchObservedRunningTime="2025-12-09 14:38:37.044871268 +0000 UTC m=+948.941073454" Dec 09 14:38:41 crc kubenswrapper[4770]: I1209 14:38:41.505915 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lg6q8" Dec 09 14:38:41 crc kubenswrapper[4770]: I1209 14:38:41.834957 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:41 crc kubenswrapper[4770]: I1209 14:38:41.834998 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:41 crc kubenswrapper[4770]: I1209 14:38:41.841940 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:42 crc kubenswrapper[4770]: I1209 14:38:42.061836 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-745bd78b79-v8zh6" Dec 09 14:38:42 crc kubenswrapper[4770]: I1209 14:38:42.143133 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x2gs6"] Dec 09 14:38:44 crc kubenswrapper[4770]: I1209 14:38:44.244002 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:38:44 crc kubenswrapper[4770]: I1209 14:38:44.244365 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:38:52 crc kubenswrapper[4770]: I1209 14:38:52.041979 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j58vx" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.205862 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-x2gs6" podUID="c1822a0a-3dcd-455f-a11c-15c6171f2068" containerName="console" containerID="cri-o://bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7" gracePeriod=15 Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.219816 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2"] Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.222877 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.226343 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.247251 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2"] Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.392202 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pdvz\" (UniqueName: \"kubernetes.io/projected/20fc30df-3858-4501-ba57-581d2c933e56-kube-api-access-6pdvz\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.392276 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.392300 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.493520 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pdvz\" (UniqueName: \"kubernetes.io/projected/20fc30df-3858-4501-ba57-581d2c933e56-kube-api-access-6pdvz\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.493575 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.493627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.494185 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.494295 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.526468 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pdvz\" (UniqueName: \"kubernetes.io/projected/20fc30df-3858-4501-ba57-581d2c933e56-kube-api-access-6pdvz\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.544627 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.566554 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x2gs6_c1822a0a-3dcd-455f-a11c-15c6171f2068/console/0.log" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.566683 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.601353 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-trusted-ca-bundle\") pod \"c1822a0a-3dcd-455f-a11c-15c6171f2068\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.601447 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgx95\" (UniqueName: \"kubernetes.io/projected/c1822a0a-3dcd-455f-a11c-15c6171f2068-kube-api-access-zgx95\") pod \"c1822a0a-3dcd-455f-a11c-15c6171f2068\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.601507 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-service-ca\") pod \"c1822a0a-3dcd-455f-a11c-15c6171f2068\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.601583 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-oauth-serving-cert\") pod \"c1822a0a-3dcd-455f-a11c-15c6171f2068\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.601631 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-config\") pod \"c1822a0a-3dcd-455f-a11c-15c6171f2068\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.601660 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-oauth-config\") pod \"c1822a0a-3dcd-455f-a11c-15c6171f2068\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.601698 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-serving-cert\") pod \"c1822a0a-3dcd-455f-a11c-15c6171f2068\" (UID: \"c1822a0a-3dcd-455f-a11c-15c6171f2068\") " Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.602903 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c1822a0a-3dcd-455f-a11c-15c6171f2068" (UID: "c1822a0a-3dcd-455f-a11c-15c6171f2068"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.602938 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-config" (OuterVolumeSpecName: "console-config") pod "c1822a0a-3dcd-455f-a11c-15c6171f2068" (UID: "c1822a0a-3dcd-455f-a11c-15c6171f2068"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.603621 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-service-ca" (OuterVolumeSpecName: "service-ca") pod "c1822a0a-3dcd-455f-a11c-15c6171f2068" (UID: "c1822a0a-3dcd-455f-a11c-15c6171f2068"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.603911 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c1822a0a-3dcd-455f-a11c-15c6171f2068" (UID: "c1822a0a-3dcd-455f-a11c-15c6171f2068"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.607268 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c1822a0a-3dcd-455f-a11c-15c6171f2068" (UID: "c1822a0a-3dcd-455f-a11c-15c6171f2068"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.607679 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c1822a0a-3dcd-455f-a11c-15c6171f2068" (UID: "c1822a0a-3dcd-455f-a11c-15c6171f2068"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.608586 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1822a0a-3dcd-455f-a11c-15c6171f2068-kube-api-access-zgx95" (OuterVolumeSpecName: "kube-api-access-zgx95") pod "c1822a0a-3dcd-455f-a11c-15c6171f2068" (UID: "c1822a0a-3dcd-455f-a11c-15c6171f2068"). InnerVolumeSpecName "kube-api-access-zgx95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.703609 4770 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.703663 4770 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.703709 4770 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.703741 4770 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1822a0a-3dcd-455f-a11c-15c6171f2068-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.703754 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.703764 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgx95\" (UniqueName: \"kubernetes.io/projected/c1822a0a-3dcd-455f-a11c-15c6171f2068-kube-api-access-zgx95\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.703778 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1822a0a-3dcd-455f-a11c-15c6171f2068-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:07 crc kubenswrapper[4770]: I1209 14:39:07.782232 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2"] Dec 09 14:39:07 crc kubenswrapper[4770]: W1209 14:39:07.785086 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20fc30df_3858_4501_ba57_581d2c933e56.slice/crio-f428c0f842dda67b1957c5b40673d8a7a6eba86cd12f46086be1c9f7b3952b55 WatchSource:0}: Error finding container f428c0f842dda67b1957c5b40673d8a7a6eba86cd12f46086be1c9f7b3952b55: Status 404 returned error can't find the container with id f428c0f842dda67b1957c5b40673d8a7a6eba86cd12f46086be1c9f7b3952b55 Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.256021 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x2gs6_c1822a0a-3dcd-455f-a11c-15c6171f2068/console/0.log" Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.256066 4770 generic.go:334] "Generic (PLEG): container finished" podID="c1822a0a-3dcd-455f-a11c-15c6171f2068" containerID="bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7" exitCode=2 Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.256126 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x2gs6" event={"ID":"c1822a0a-3dcd-455f-a11c-15c6171f2068","Type":"ContainerDied","Data":"bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7"} Dec 09 14:39:08 crc kubenswrapper[4770]: 
I1209 14:39:08.256156 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x2gs6" event={"ID":"c1822a0a-3dcd-455f-a11c-15c6171f2068","Type":"ContainerDied","Data":"0d29c691627de94b79e48ba9db9a1fef7b1686dbec7cc21aae4fab2393152cb3"} Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.256177 4770 scope.go:117] "RemoveContainer" containerID="bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7" Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.256247 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x2gs6" Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.259125 4770 generic.go:334] "Generic (PLEG): container finished" podID="20fc30df-3858-4501-ba57-581d2c933e56" containerID="0af8807a0564231717062067ed1cfad5572cfd2eb973448806f17b57b7beee8a" exitCode=0 Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.259160 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" event={"ID":"20fc30df-3858-4501-ba57-581d2c933e56","Type":"ContainerDied","Data":"0af8807a0564231717062067ed1cfad5572cfd2eb973448806f17b57b7beee8a"} Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.259188 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" event={"ID":"20fc30df-3858-4501-ba57-581d2c933e56","Type":"ContainerStarted","Data":"f428c0f842dda67b1957c5b40673d8a7a6eba86cd12f46086be1c9f7b3952b55"} Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.275312 4770 scope.go:117] "RemoveContainer" containerID="bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7" Dec 09 14:39:08 crc kubenswrapper[4770]: E1209 14:39:08.276326 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7\": container with ID starting with bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7 not found: ID does not exist" containerID="bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7" Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.276362 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7"} err="failed to get container status \"bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7\": rpc error: code = NotFound desc = could not find container \"bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7\": container with ID starting with bb4c95484dd7e5f0de014c0861d5f18a0de6dd2cb9001d30b13ec11b8f7b97c7 not found: ID does not exist" Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.303216 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x2gs6"] Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.307642 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-x2gs6"] Dec 09 14:39:08 crc kubenswrapper[4770]: I1209 14:39:08.597575 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1822a0a-3dcd-455f-a11c-15c6171f2068" path="/var/lib/kubelet/pods/c1822a0a-3dcd-455f-a11c-15c6171f2068/volumes" Dec 09 14:39:10 crc kubenswrapper[4770]: I1209 14:39:10.275363 4770 generic.go:334] "Generic (PLEG): container 
finished" podID="20fc30df-3858-4501-ba57-581d2c933e56" containerID="d6801bd0f902fba6e72fe64f57e63345a9ecd94c7f92cd842caef250ef27e496" exitCode=0 Dec 09 14:39:10 crc kubenswrapper[4770]: I1209 14:39:10.275418 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" event={"ID":"20fc30df-3858-4501-ba57-581d2c933e56","Type":"ContainerDied","Data":"d6801bd0f902fba6e72fe64f57e63345a9ecd94c7f92cd842caef250ef27e496"} Dec 09 14:39:11 crc kubenswrapper[4770]: I1209 14:39:11.283375 4770 generic.go:334] "Generic (PLEG): container finished" podID="20fc30df-3858-4501-ba57-581d2c933e56" containerID="f97e03adf8d4bfc3f767a8533e05f754264c9cae696a474caa7a1e9aa9e098c0" exitCode=0 Dec 09 14:39:11 crc kubenswrapper[4770]: I1209 14:39:11.283478 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" event={"ID":"20fc30df-3858-4501-ba57-581d2c933e56","Type":"ContainerDied","Data":"f97e03adf8d4bfc3f767a8533e05f754264c9cae696a474caa7a1e9aa9e098c0"} Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.540758 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.659893 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-util\") pod \"20fc30df-3858-4501-ba57-581d2c933e56\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.659952 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-bundle\") pod \"20fc30df-3858-4501-ba57-581d2c933e56\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.659996 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pdvz\" (UniqueName: \"kubernetes.io/projected/20fc30df-3858-4501-ba57-581d2c933e56-kube-api-access-6pdvz\") pod \"20fc30df-3858-4501-ba57-581d2c933e56\" (UID: \"20fc30df-3858-4501-ba57-581d2c933e56\") " Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.660662 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-bundle" (OuterVolumeSpecName: "bundle") pod "20fc30df-3858-4501-ba57-581d2c933e56" (UID: "20fc30df-3858-4501-ba57-581d2c933e56"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.664915 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20fc30df-3858-4501-ba57-581d2c933e56-kube-api-access-6pdvz" (OuterVolumeSpecName: "kube-api-access-6pdvz") pod "20fc30df-3858-4501-ba57-581d2c933e56" (UID: "20fc30df-3858-4501-ba57-581d2c933e56"). InnerVolumeSpecName "kube-api-access-6pdvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.674507 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-util" (OuterVolumeSpecName: "util") pod "20fc30df-3858-4501-ba57-581d2c933e56" (UID: "20fc30df-3858-4501-ba57-581d2c933e56"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.762533 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.762569 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pdvz\" (UniqueName: \"kubernetes.io/projected/20fc30df-3858-4501-ba57-581d2c933e56-kube-api-access-6pdvz\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:12 crc kubenswrapper[4770]: I1209 14:39:12.762581 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20fc30df-3858-4501-ba57-581d2c933e56-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:13 crc kubenswrapper[4770]: I1209 14:39:13.297876 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" event={"ID":"20fc30df-3858-4501-ba57-581d2c933e56","Type":"ContainerDied","Data":"f428c0f842dda67b1957c5b40673d8a7a6eba86cd12f46086be1c9f7b3952b55"} Dec 09 14:39:13 crc kubenswrapper[4770]: I1209 14:39:13.297922 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f428c0f842dda67b1957c5b40673d8a7a6eba86cd12f46086be1c9f7b3952b55" Dec 09 14:39:13 crc kubenswrapper[4770]: I1209 14:39:13.297933 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2" Dec 09 14:39:14 crc kubenswrapper[4770]: I1209 14:39:14.243478 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:39:14 crc kubenswrapper[4770]: I1209 14:39:14.243544 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.502764 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv"] Dec 09 14:39:24 crc kubenswrapper[4770]: E1209 14:39:24.503499 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fc30df-3858-4501-ba57-581d2c933e56" containerName="util" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.503512 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fc30df-3858-4501-ba57-581d2c933e56" containerName="util" Dec 09 14:39:24 crc kubenswrapper[4770]: E1209 14:39:24.503528 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fc30df-3858-4501-ba57-581d2c933e56" containerName="pull" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.503533 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fc30df-3858-4501-ba57-581d2c933e56" containerName="pull" Dec 09 14:39:24 crc kubenswrapper[4770]: E1209 14:39:24.503542 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1822a0a-3dcd-455f-a11c-15c6171f2068" containerName="console" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.503549 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1822a0a-3dcd-455f-a11c-15c6171f2068" containerName="console" Dec 09 14:39:24 crc kubenswrapper[4770]: E1209 14:39:24.503562 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fc30df-3858-4501-ba57-581d2c933e56" containerName="extract" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.503568 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fc30df-3858-4501-ba57-581d2c933e56" containerName="extract" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.503673 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fc30df-3858-4501-ba57-581d2c933e56" containerName="extract" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.503689 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1822a0a-3dcd-455f-a11c-15c6171f2068" containerName="console" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.504134 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.506342 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.506514 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-pbwwx" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.508563 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.510682 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.512920 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.538885 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv"] Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.565766 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cztbb\" (UniqueName: \"kubernetes.io/projected/52ee6ee8-2d9b-47f4-b058-7c85c1673f97-kube-api-access-cztbb\") pod \"metallb-operator-controller-manager-699f6cb9bd-z46mv\" (UID: \"52ee6ee8-2d9b-47f4-b058-7c85c1673f97\") " pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.566064 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52ee6ee8-2d9b-47f4-b058-7c85c1673f97-webhook-cert\") pod \"metallb-operator-controller-manager-699f6cb9bd-z46mv\" (UID: \"52ee6ee8-2d9b-47f4-b058-7c85c1673f97\") " pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.566225 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52ee6ee8-2d9b-47f4-b058-7c85c1673f97-apiservice-cert\") pod \"metallb-operator-controller-manager-699f6cb9bd-z46mv\" (UID: \"52ee6ee8-2d9b-47f4-b058-7c85c1673f97\") " pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.666845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cztbb\" (UniqueName: \"kubernetes.io/projected/52ee6ee8-2d9b-47f4-b058-7c85c1673f97-kube-api-access-cztbb\") pod \"metallb-operator-controller-manager-699f6cb9bd-z46mv\" (UID: \"52ee6ee8-2d9b-47f4-b058-7c85c1673f97\") " pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.667174 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52ee6ee8-2d9b-47f4-b058-7c85c1673f97-webhook-cert\") pod \"metallb-operator-controller-manager-699f6cb9bd-z46mv\" (UID: \"52ee6ee8-2d9b-47f4-b058-7c85c1673f97\") " pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.667314 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52ee6ee8-2d9b-47f4-b058-7c85c1673f97-apiservice-cert\") pod \"metallb-operator-controller-manager-699f6cb9bd-z46mv\" (UID: \"52ee6ee8-2d9b-47f4-b058-7c85c1673f97\") " pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.679775 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52ee6ee8-2d9b-47f4-b058-7c85c1673f97-apiservice-cert\") pod \"metallb-operator-controller-manager-699f6cb9bd-z46mv\" (UID: \"52ee6ee8-2d9b-47f4-b058-7c85c1673f97\") " pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.679883 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52ee6ee8-2d9b-47f4-b058-7c85c1673f97-webhook-cert\") pod \"metallb-operator-controller-manager-699f6cb9bd-z46mv\" (UID: \"52ee6ee8-2d9b-47f4-b058-7c85c1673f97\") " pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.685549 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cztbb\" (UniqueName: \"kubernetes.io/projected/52ee6ee8-2d9b-47f4-b058-7c85c1673f97-kube-api-access-cztbb\") pod \"metallb-operator-controller-manager-699f6cb9bd-z46mv\" (UID: \"52ee6ee8-2d9b-47f4-b058-7c85c1673f97\") " pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.831498 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.989148 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9"] Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.990484 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.995179 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.995298 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-v2lc9" Dec 09 14:39:24 crc kubenswrapper[4770]: I1209 14:39:24.995670 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9"] Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:24.998439 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.071501 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6w6d\" (UniqueName: \"kubernetes.io/projected/a739688c-a6ba-4579-bbcf-7a13f53bc412-kube-api-access-n6w6d\") pod \"metallb-operator-webhook-server-7d5876576f-fnms9\" (UID: \"a739688c-a6ba-4579-bbcf-7a13f53bc412\") " pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.071587 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a739688c-a6ba-4579-bbcf-7a13f53bc412-apiservice-cert\") pod \"metallb-operator-webhook-server-7d5876576f-fnms9\" (UID: \"a739688c-a6ba-4579-bbcf-7a13f53bc412\") " pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.071611 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a739688c-a6ba-4579-bbcf-7a13f53bc412-webhook-cert\") pod \"metallb-operator-webhook-server-7d5876576f-fnms9\" (UID: \"a739688c-a6ba-4579-bbcf-7a13f53bc412\") " pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.172452 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6w6d\" (UniqueName: \"kubernetes.io/projected/a739688c-a6ba-4579-bbcf-7a13f53bc412-kube-api-access-n6w6d\") pod \"metallb-operator-webhook-server-7d5876576f-fnms9\" (UID: \"a739688c-a6ba-4579-bbcf-7a13f53bc412\") " pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.172536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a739688c-a6ba-4579-bbcf-7a13f53bc412-apiservice-cert\") pod \"metallb-operator-webhook-server-7d5876576f-fnms9\" (UID: \"a739688c-a6ba-4579-bbcf-7a13f53bc412\") " pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.172565 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a739688c-a6ba-4579-bbcf-7a13f53bc412-webhook-cert\") pod \"metallb-operator-webhook-server-7d5876576f-fnms9\" (UID: \"a739688c-a6ba-4579-bbcf-7a13f53bc412\") " pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 
14:39:25.195258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a739688c-a6ba-4579-bbcf-7a13f53bc412-apiservice-cert\") pod \"metallb-operator-webhook-server-7d5876576f-fnms9\" (UID: \"a739688c-a6ba-4579-bbcf-7a13f53bc412\") " pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.195626 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a739688c-a6ba-4579-bbcf-7a13f53bc412-webhook-cert\") pod \"metallb-operator-webhook-server-7d5876576f-fnms9\" (UID: \"a739688c-a6ba-4579-bbcf-7a13f53bc412\") " pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.216177 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6w6d\" (UniqueName: \"kubernetes.io/projected/a739688c-a6ba-4579-bbcf-7a13f53bc412-kube-api-access-n6w6d\") pod \"metallb-operator-webhook-server-7d5876576f-fnms9\" (UID: \"a739688c-a6ba-4579-bbcf-7a13f53bc412\") " pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.288678 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv"] Dec 09 14:39:25 crc kubenswrapper[4770]: W1209 14:39:25.298840 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52ee6ee8_2d9b_47f4_b058_7c85c1673f97.slice/crio-eafb89885257ee7c81d3112e60bcdd1e528158fd5051863d04ededbd7a9caf33 WatchSource:0}: Error finding container eafb89885257ee7c81d3112e60bcdd1e528158fd5051863d04ededbd7a9caf33: Status 404 returned error can't find the container with id eafb89885257ee7c81d3112e60bcdd1e528158fd5051863d04ededbd7a9caf33 Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.329883 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.375568 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" event={"ID":"52ee6ee8-2d9b-47f4-b058-7c85c1673f97","Type":"ContainerStarted","Data":"eafb89885257ee7c81d3112e60bcdd1e528158fd5051863d04ededbd7a9caf33"} Dec 09 14:39:25 crc kubenswrapper[4770]: I1209 14:39:25.581092 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9"] Dec 09 14:39:25 crc kubenswrapper[4770]: W1209 14:39:25.583490 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda739688c_a6ba_4579_bbcf_7a13f53bc412.slice/crio-82f10e7501923ea55d88d038ae194cfaf751fda8ee8f640f5627c7e2f36cc19f WatchSource:0}: Error finding container 82f10e7501923ea55d88d038ae194cfaf751fda8ee8f640f5627c7e2f36cc19f: Status 404 returned error can't find the container with id 82f10e7501923ea55d88d038ae194cfaf751fda8ee8f640f5627c7e2f36cc19f Dec 09 14:39:26 crc kubenswrapper[4770]: I1209 14:39:26.383878 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" event={"ID":"a739688c-a6ba-4579-bbcf-7a13f53bc412","Type":"ContainerStarted","Data":"82f10e7501923ea55d88d038ae194cfaf751fda8ee8f640f5627c7e2f36cc19f"} Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.170459 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ln2hj"] Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.172632 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.180148 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ln2hj"] Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.199262 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-utilities\") pod \"community-operators-ln2hj\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.199333 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-catalog-content\") pod \"community-operators-ln2hj\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.199367 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drg87\" (UniqueName: \"kubernetes.io/projected/b0711d5e-a16f-4662-8b42-34e48e357c73-kube-api-access-drg87\") pod \"community-operators-ln2hj\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.300148 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-utilities\") pod \"community-operators-ln2hj\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.300212 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-catalog-content\") pod \"community-operators-ln2hj\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.300242 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drg87\" (UniqueName: \"kubernetes.io/projected/b0711d5e-a16f-4662-8b42-34e48e357c73-kube-api-access-drg87\") pod \"community-operators-ln2hj\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.301004 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-utilities\") pod \"community-operators-ln2hj\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.301317 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-catalog-content\") pod \"community-operators-ln2hj\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.334200 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-drg87\" (UniqueName: \"kubernetes.io/projected/b0711d5e-a16f-4662-8b42-34e48e357c73-kube-api-access-drg87\") pod \"community-operators-ln2hj\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:27 crc kubenswrapper[4770]: I1209 14:39:27.502648 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:29 crc kubenswrapper[4770]: I1209 14:39:29.657680 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ln2hj"] Dec 09 14:39:30 crc kubenswrapper[4770]: I1209 14:39:30.419330 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" event={"ID":"52ee6ee8-2d9b-47f4-b058-7c85c1673f97","Type":"ContainerStarted","Data":"f525ea80ee31eaa6768185ecec651d3a0fd5d36345e85e88aaadd6448e76c644"} Dec 09 14:39:30 crc kubenswrapper[4770]: I1209 14:39:30.419927 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:39:30 crc kubenswrapper[4770]: I1209 14:39:30.450645 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" podStartSLOduration=2.446670836 podStartE2EDuration="6.450617829s" podCreationTimestamp="2025-12-09 14:39:24 +0000 UTC" firstStartedPulling="2025-12-09 14:39:25.30231011 +0000 UTC m=+997.198512246" lastFinishedPulling="2025-12-09 14:39:29.306257093 +0000 UTC m=+1001.202459239" observedRunningTime="2025-12-09 14:39:30.437652357 +0000 UTC m=+1002.333854493" watchObservedRunningTime="2025-12-09 14:39:30.450617829 +0000 UTC m=+1002.346819965" Dec 09 14:39:31 crc kubenswrapper[4770]: I1209 14:39:31.430160 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2hj" event={"ID":"b0711d5e-a16f-4662-8b42-34e48e357c73","Type":"ContainerStarted","Data":"cf0bd59197ec853b94f6bb33ae7ae529f09ebbffe6d83e3f049509dee08aa946"} Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.439645 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" event={"ID":"a739688c-a6ba-4579-bbcf-7a13f53bc412","Type":"ContainerStarted","Data":"e78a318bbc1a1c3707df8d5aca897ec939d0418388d66395bf86c87af97aaa40"} Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.440091 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.441061 4770 generic.go:334] "Generic (PLEG): container finished" podID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerID="d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165" exitCode=0 Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.441096 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2hj" event={"ID":"b0711d5e-a16f-4662-8b42-34e48e357c73","Type":"ContainerDied","Data":"d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165"} Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.523017 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" podStartSLOduration=2.760180782 
podStartE2EDuration="8.522991676s" podCreationTimestamp="2025-12-09 14:39:24 +0000 UTC" firstStartedPulling="2025-12-09 14:39:25.585990254 +0000 UTC m=+997.482192390" lastFinishedPulling="2025-12-09 14:39:31.348801148 +0000 UTC m=+1003.245003284" observedRunningTime="2025-12-09 14:39:32.521817554 +0000 UTC m=+1004.418019700" watchObservedRunningTime="2025-12-09 14:39:32.522991676 +0000 UTC m=+1004.419193812" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.563082 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bkqwf"] Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.572571 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.573355 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bkqwf"] Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.617600 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-catalog-content\") pod \"certified-operators-bkqwf\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.617673 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt29t\" (UniqueName: \"kubernetes.io/projected/3537d264-a9ab-42c6-8528-c491277fe20f-kube-api-access-bt29t\") pod \"certified-operators-bkqwf\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.617701 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-utilities\") pod \"certified-operators-bkqwf\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.718489 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-catalog-content\") pod \"certified-operators-bkqwf\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.718549 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt29t\" (UniqueName: \"kubernetes.io/projected/3537d264-a9ab-42c6-8528-c491277fe20f-kube-api-access-bt29t\") pod \"certified-operators-bkqwf\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.718579 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-utilities\") pod \"certified-operators-bkqwf\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.719096 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-catalog-content\") pod \"certified-operators-bkqwf\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.719125 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-utilities\") pod \"certified-operators-bkqwf\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.739055 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt29t\" (UniqueName: \"kubernetes.io/projected/3537d264-a9ab-42c6-8528-c491277fe20f-kube-api-access-bt29t\") pod \"certified-operators-bkqwf\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:32 crc kubenswrapper[4770]: I1209 14:39:32.916656 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:33 crc kubenswrapper[4770]: I1209 14:39:33.444622 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bkqwf"] Dec 09 14:39:33 crc kubenswrapper[4770]: W1209 14:39:33.448658 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3537d264_a9ab_42c6_8528_c491277fe20f.slice/crio-f8805e47052c16b65eb98a47d7952f672426b0e713f61f17aff0887938a017bc WatchSource:0}: Error finding container f8805e47052c16b65eb98a47d7952f672426b0e713f61f17aff0887938a017bc: Status 404 returned error can't find the container with id f8805e47052c16b65eb98a47d7952f672426b0e713f61f17aff0887938a017bc Dec 09 14:39:33 crc kubenswrapper[4770]: I1209 14:39:33.449436 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2hj" event={"ID":"b0711d5e-a16f-4662-8b42-34e48e357c73","Type":"ContainerStarted","Data":"f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467"} Dec 09 14:39:34 crc kubenswrapper[4770]: I1209 14:39:34.458512 4770 generic.go:334] "Generic (PLEG): container finished" podID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerID="f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467" exitCode=0 Dec 09 14:39:34 crc kubenswrapper[4770]: I1209 14:39:34.458597 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2hj" event={"ID":"b0711d5e-a16f-4662-8b42-34e48e357c73","Type":"ContainerDied","Data":"f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467"} Dec 09 14:39:34 crc kubenswrapper[4770]: I1209 14:39:34.462321 4770 generic.go:334] "Generic (PLEG): container finished" podID="3537d264-a9ab-42c6-8528-c491277fe20f" containerID="e56999354a02b0435dbe4eb381f4c614f344629aed23c88e1af95dcce963d36e" exitCode=0 Dec 09 14:39:34 crc kubenswrapper[4770]: I1209 14:39:34.462445 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkqwf" event={"ID":"3537d264-a9ab-42c6-8528-c491277fe20f","Type":"ContainerDied","Data":"e56999354a02b0435dbe4eb381f4c614f344629aed23c88e1af95dcce963d36e"} Dec 09 14:39:34 crc kubenswrapper[4770]: I1209 14:39:34.463248 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bkqwf" event={"ID":"3537d264-a9ab-42c6-8528-c491277fe20f","Type":"ContainerStarted","Data":"f8805e47052c16b65eb98a47d7952f672426b0e713f61f17aff0887938a017bc"} Dec 09 14:39:35 crc kubenswrapper[4770]: I1209 14:39:35.471649 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2hj" event={"ID":"b0711d5e-a16f-4662-8b42-34e48e357c73","Type":"ContainerStarted","Data":"614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883"} Dec 09 14:39:35 crc kubenswrapper[4770]: I1209 14:39:35.473705 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkqwf" event={"ID":"3537d264-a9ab-42c6-8528-c491277fe20f","Type":"ContainerStarted","Data":"1b3292d3ac8c6ba90d2bbe5a45354c5a2bca776f39a4e9ab2b3ff540207838f8"} Dec 09 14:39:35 crc kubenswrapper[4770]: I1209 14:39:35.520920 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ln2hj" podStartSLOduration=5.892236161 podStartE2EDuration="8.520882747s" podCreationTimestamp="2025-12-09 14:39:27 +0000 UTC" firstStartedPulling="2025-12-09 14:39:32.442467108 +0000 UTC m=+1004.338669244" lastFinishedPulling="2025-12-09 14:39:35.071113694 +0000 UTC m=+1006.967315830" observedRunningTime="2025-12-09 14:39:35.494272314 +0000 UTC m=+1007.390474450" watchObservedRunningTime="2025-12-09 14:39:35.520882747 +0000 UTC m=+1007.417084883" Dec 09 14:39:36 crc kubenswrapper[4770]: I1209 14:39:36.484100 4770 generic.go:334] "Generic (PLEG): container finished" podID="3537d264-a9ab-42c6-8528-c491277fe20f" containerID="1b3292d3ac8c6ba90d2bbe5a45354c5a2bca776f39a4e9ab2b3ff540207838f8" exitCode=0 Dec 09 14:39:36 crc kubenswrapper[4770]: I1209 14:39:36.484901 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkqwf" event={"ID":"3537d264-a9ab-42c6-8528-c491277fe20f","Type":"ContainerDied","Data":"1b3292d3ac8c6ba90d2bbe5a45354c5a2bca776f39a4e9ab2b3ff540207838f8"} Dec 09 14:39:37 crc kubenswrapper[4770]: I1209 14:39:37.503635 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:37 crc kubenswrapper[4770]: I1209 14:39:37.503984 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:37 crc kubenswrapper[4770]: I1209 14:39:37.549133 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:38 crc kubenswrapper[4770]: I1209 14:39:38.499615 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkqwf" event={"ID":"3537d264-a9ab-42c6-8528-c491277fe20f","Type":"ContainerStarted","Data":"167e879335bb45b3434e9df800f0f27ac23066cbe7ebaf9fa42e4873e9a73b6d"} Dec 09 14:39:38 crc kubenswrapper[4770]: I1209 14:39:38.521147 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bkqwf" podStartSLOduration=3.177207846 podStartE2EDuration="6.521132033s" podCreationTimestamp="2025-12-09 14:39:32 +0000 UTC" firstStartedPulling="2025-12-09 14:39:34.464537529 +0000 UTC m=+1006.360739675" lastFinishedPulling="2025-12-09 14:39:37.808461726 +0000 UTC m=+1009.704663862" observedRunningTime="2025-12-09 14:39:38.515114374 +0000 UTC m=+1010.411316510" watchObservedRunningTime="2025-12-09 
14:39:38.521132033 +0000 UTC m=+1010.417334169" Dec 09 14:39:42 crc kubenswrapper[4770]: I1209 14:39:42.917416 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:42 crc kubenswrapper[4770]: I1209 14:39:42.917955 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:42 crc kubenswrapper[4770]: I1209 14:39:42.962094 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:43 crc kubenswrapper[4770]: I1209 14:39:43.633517 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:44 crc kubenswrapper[4770]: I1209 14:39:44.246206 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:39:44 crc kubenswrapper[4770]: I1209 14:39:44.246266 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:39:44 crc kubenswrapper[4770]: I1209 14:39:44.246315 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:39:44 crc kubenswrapper[4770]: I1209 14:39:44.246874 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"852cc9377060614876d64ddd5f4b7f4f0b5e6e1aa2ca6cd6b6ccad59c9c4ae81"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:39:44 crc kubenswrapper[4770]: I1209 14:39:44.246918 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://852cc9377060614876d64ddd5f4b7f4f0b5e6e1aa2ca6cd6b6ccad59c9c4ae81" gracePeriod=600 Dec 09 14:39:45 crc kubenswrapper[4770]: I1209 14:39:45.554418 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkqwf"] Dec 09 14:39:45 crc kubenswrapper[4770]: I1209 14:39:45.621922 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7d5876576f-fnms9" Dec 09 14:39:45 crc kubenswrapper[4770]: I1209 14:39:45.624563 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bkqwf" podUID="3537d264-a9ab-42c6-8528-c491277fe20f" containerName="registry-server" containerID="cri-o://167e879335bb45b3434e9df800f0f27ac23066cbe7ebaf9fa42e4873e9a73b6d" gracePeriod=2 Dec 09 14:39:46 crc kubenswrapper[4770]: I1209 14:39:46.634089 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" 
containerID="852cc9377060614876d64ddd5f4b7f4f0b5e6e1aa2ca6cd6b6ccad59c9c4ae81" exitCode=0 Dec 09 14:39:46 crc kubenswrapper[4770]: I1209 14:39:46.634183 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"852cc9377060614876d64ddd5f4b7f4f0b5e6e1aa2ca6cd6b6ccad59c9c4ae81"} Dec 09 14:39:46 crc kubenswrapper[4770]: I1209 14:39:46.634451 4770 scope.go:117] "RemoveContainer" containerID="7a2d7351a37474ac1217b67bfcec9a9843ac24bf4743146b36e820021a607f34" Dec 09 14:39:46 crc kubenswrapper[4770]: I1209 14:39:46.637926 4770 generic.go:334] "Generic (PLEG): container finished" podID="3537d264-a9ab-42c6-8528-c491277fe20f" containerID="167e879335bb45b3434e9df800f0f27ac23066cbe7ebaf9fa42e4873e9a73b6d" exitCode=0 Dec 09 14:39:46 crc kubenswrapper[4770]: I1209 14:39:46.637973 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkqwf" event={"ID":"3537d264-a9ab-42c6-8528-c491277fe20f","Type":"ContainerDied","Data":"167e879335bb45b3434e9df800f0f27ac23066cbe7ebaf9fa42e4873e9a73b6d"} Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.081896 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.154397 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-utilities\") pod \"3537d264-a9ab-42c6-8528-c491277fe20f\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.154524 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-catalog-content\") pod \"3537d264-a9ab-42c6-8528-c491277fe20f\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.154604 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt29t\" (UniqueName: \"kubernetes.io/projected/3537d264-a9ab-42c6-8528-c491277fe20f-kube-api-access-bt29t\") pod \"3537d264-a9ab-42c6-8528-c491277fe20f\" (UID: \"3537d264-a9ab-42c6-8528-c491277fe20f\") " Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.155894 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-utilities" (OuterVolumeSpecName: "utilities") pod "3537d264-a9ab-42c6-8528-c491277fe20f" (UID: "3537d264-a9ab-42c6-8528-c491277fe20f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.178978 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3537d264-a9ab-42c6-8528-c491277fe20f-kube-api-access-bt29t" (OuterVolumeSpecName: "kube-api-access-bt29t") pod "3537d264-a9ab-42c6-8528-c491277fe20f" (UID: "3537d264-a9ab-42c6-8528-c491277fe20f"). InnerVolumeSpecName "kube-api-access-bt29t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.237199 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3537d264-a9ab-42c6-8528-c491277fe20f" (UID: "3537d264-a9ab-42c6-8528-c491277fe20f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.256029 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.256304 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3537d264-a9ab-42c6-8528-c491277fe20f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.256314 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt29t\" (UniqueName: \"kubernetes.io/projected/3537d264-a9ab-42c6-8528-c491277fe20f-kube-api-access-bt29t\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.552292 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.646201 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"0f16eaf98d6441c99fac37159c836b0846fa6ac7bd81ba244c2067e5f830e8c2"} Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.648448 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkqwf" event={"ID":"3537d264-a9ab-42c6-8528-c491277fe20f","Type":"ContainerDied","Data":"f8805e47052c16b65eb98a47d7952f672426b0e713f61f17aff0887938a017bc"} Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.648489 4770 scope.go:117] "RemoveContainer" containerID="167e879335bb45b3434e9df800f0f27ac23066cbe7ebaf9fa42e4873e9a73b6d" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.648590 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bkqwf" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.667488 4770 scope.go:117] "RemoveContainer" containerID="1b3292d3ac8c6ba90d2bbe5a45354c5a2bca776f39a4e9ab2b3ff540207838f8" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.684826 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkqwf"] Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.687296 4770 scope.go:117] "RemoveContainer" containerID="e56999354a02b0435dbe4eb381f4c614f344629aed23c88e1af95dcce963d36e" Dec 09 14:39:47 crc kubenswrapper[4770]: I1209 14:39:47.689557 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bkqwf"] Dec 09 14:39:48 crc kubenswrapper[4770]: I1209 14:39:48.599928 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3537d264-a9ab-42c6-8528-c491277fe20f" path="/var/lib/kubelet/pods/3537d264-a9ab-42c6-8528-c491277fe20f/volumes" Dec 09 14:39:51 crc kubenswrapper[4770]: I1209 14:39:51.156442 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ln2hj"] Dec 09 14:39:51 crc kubenswrapper[4770]: I1209 14:39:51.157316 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ln2hj" podUID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerName="registry-server" containerID="cri-o://614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883" gracePeriod=2 Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.085855 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.250037 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drg87\" (UniqueName: \"kubernetes.io/projected/b0711d5e-a16f-4662-8b42-34e48e357c73-kube-api-access-drg87\") pod \"b0711d5e-a16f-4662-8b42-34e48e357c73\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.250089 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-utilities\") pod \"b0711d5e-a16f-4662-8b42-34e48e357c73\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.250123 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-catalog-content\") pod \"b0711d5e-a16f-4662-8b42-34e48e357c73\" (UID: \"b0711d5e-a16f-4662-8b42-34e48e357c73\") " Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.253280 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-utilities" (OuterVolumeSpecName: "utilities") pod "b0711d5e-a16f-4662-8b42-34e48e357c73" (UID: "b0711d5e-a16f-4662-8b42-34e48e357c73"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.259530 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0711d5e-a16f-4662-8b42-34e48e357c73-kube-api-access-drg87" (OuterVolumeSpecName: "kube-api-access-drg87") pod "b0711d5e-a16f-4662-8b42-34e48e357c73" (UID: "b0711d5e-a16f-4662-8b42-34e48e357c73"). InnerVolumeSpecName "kube-api-access-drg87". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.308832 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0711d5e-a16f-4662-8b42-34e48e357c73" (UID: "b0711d5e-a16f-4662-8b42-34e48e357c73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.351899 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drg87\" (UniqueName: \"kubernetes.io/projected/b0711d5e-a16f-4662-8b42-34e48e357c73-kube-api-access-drg87\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.351940 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.351950 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0711d5e-a16f-4662-8b42-34e48e357c73-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.691947 4770 generic.go:334] "Generic (PLEG): container finished" podID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerID="614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883" exitCode=0 Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.692014 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2hj" event={"ID":"b0711d5e-a16f-4662-8b42-34e48e357c73","Type":"ContainerDied","Data":"614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883"} Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.692542 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2hj" event={"ID":"b0711d5e-a16f-4662-8b42-34e48e357c73","Type":"ContainerDied","Data":"cf0bd59197ec853b94f6bb33ae7ae529f09ebbffe6d83e3f049509dee08aa946"} Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.692615 4770 scope.go:117] "RemoveContainer" containerID="614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.692062 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ln2hj" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.709765 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ln2hj"] Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.713978 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ln2hj"] Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.717501 4770 scope.go:117] "RemoveContainer" containerID="f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.738399 4770 scope.go:117] "RemoveContainer" containerID="d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.755210 4770 scope.go:117] "RemoveContainer" containerID="614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883" Dec 09 14:39:52 crc kubenswrapper[4770]: E1209 14:39:52.755630 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883\": container with ID starting with 614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883 not found: ID does not exist" containerID="614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.755779 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883"} err="failed to get container status \"614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883\": rpc error: code = NotFound desc = could not find container \"614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883\": container with ID starting with 614c3cc7af7f258476deedfa55fb38d8dc0d6fe1724506e0e4416edf69c7d883 not found: ID does not exist" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.755895 4770 scope.go:117] "RemoveContainer" containerID="f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467" Dec 09 14:39:52 crc kubenswrapper[4770]: E1209 14:39:52.756386 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467\": container with ID starting with f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467 not found: ID does not exist" containerID="f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.756417 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467"} err="failed to get container status \"f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467\": rpc error: code = NotFound desc = could not find container \"f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467\": container with ID starting with f4428d3741231e00b78dc3e7d8d82c8f867f556598a962a05b2d547067941467 not found: ID does not exist" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.756438 4770 scope.go:117] "RemoveContainer" containerID="d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165" Dec 09 14:39:52 crc kubenswrapper[4770]: E1209 14:39:52.756704 4770 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165\": container with ID starting with d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165 not found: ID does not exist" containerID="d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165" Dec 09 14:39:52 crc kubenswrapper[4770]: I1209 14:39:52.756822 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165"} err="failed to get container status \"d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165\": rpc error: code = NotFound desc = could not find container \"d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165\": container with ID starting with d214f123f677281119f17bcaf17f6560ad557430a6b6b42eff394d75bfdc9165 not found: ID does not exist" Dec 09 14:39:54 crc kubenswrapper[4770]: I1209 14:39:54.596259 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0711d5e-a16f-4662-8b42-34e48e357c73" path="/var/lib/kubelet/pods/b0711d5e-a16f-4662-8b42-34e48e357c73/volumes" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.286262 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5jqgq"] Dec 09 14:40:00 crc kubenswrapper[4770]: E1209 14:40:00.287021 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerName="extract-utilities" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.287034 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerName="extract-utilities" Dec 09 14:40:00 crc kubenswrapper[4770]: E1209 14:40:00.287066 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerName="extract-content" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.287072 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerName="extract-content" Dec 09 14:40:00 crc kubenswrapper[4770]: E1209 14:40:00.287082 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerName="registry-server" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.287089 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerName="registry-server" Dec 09 14:40:00 crc kubenswrapper[4770]: E1209 14:40:00.287099 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3537d264-a9ab-42c6-8528-c491277fe20f" containerName="extract-utilities" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.287104 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3537d264-a9ab-42c6-8528-c491277fe20f" containerName="extract-utilities" Dec 09 14:40:00 crc kubenswrapper[4770]: E1209 14:40:00.287115 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3537d264-a9ab-42c6-8528-c491277fe20f" containerName="registry-server" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.287121 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3537d264-a9ab-42c6-8528-c491277fe20f" containerName="registry-server" Dec 09 14:40:00 crc kubenswrapper[4770]: E1209 14:40:00.287131 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3537d264-a9ab-42c6-8528-c491277fe20f" 
containerName="extract-content" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.287138 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3537d264-a9ab-42c6-8528-c491277fe20f" containerName="extract-content" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.287246 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0711d5e-a16f-4662-8b42-34e48e357c73" containerName="registry-server" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.287258 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3537d264-a9ab-42c6-8528-c491277fe20f" containerName="registry-server" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.288139 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.301328 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jqgq"] Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.357102 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-utilities\") pod \"redhat-marketplace-5jqgq\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.357198 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-catalog-content\") pod \"redhat-marketplace-5jqgq\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.357231 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s44cv\" (UniqueName: \"kubernetes.io/projected/b7d14273-442f-40b8-a209-f5e72c7c8114-kube-api-access-s44cv\") pod \"redhat-marketplace-5jqgq\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.458350 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-utilities\") pod \"redhat-marketplace-5jqgq\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.458456 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-catalog-content\") pod \"redhat-marketplace-5jqgq\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.458493 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s44cv\" (UniqueName: \"kubernetes.io/projected/b7d14273-442f-40b8-a209-f5e72c7c8114-kube-api-access-s44cv\") pod \"redhat-marketplace-5jqgq\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.459078 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-catalog-content\") pod \"redhat-marketplace-5jqgq\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.459227 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-utilities\") pod \"redhat-marketplace-5jqgq\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.477486 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s44cv\" (UniqueName: \"kubernetes.io/projected/b7d14273-442f-40b8-a209-f5e72c7c8114-kube-api-access-s44cv\") pod \"redhat-marketplace-5jqgq\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:00 crc kubenswrapper[4770]: I1209 14:40:00.605943 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:01 crc kubenswrapper[4770]: I1209 14:40:01.115649 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jqgq"] Dec 09 14:40:01 crc kubenswrapper[4770]: I1209 14:40:01.763002 4770 generic.go:334] "Generic (PLEG): container finished" podID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerID="168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00" exitCode=0 Dec 09 14:40:01 crc kubenswrapper[4770]: I1209 14:40:01.763042 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jqgq" event={"ID":"b7d14273-442f-40b8-a209-f5e72c7c8114","Type":"ContainerDied","Data":"168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00"} Dec 09 14:40:01 crc kubenswrapper[4770]: I1209 14:40:01.763069 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jqgq" event={"ID":"b7d14273-442f-40b8-a209-f5e72c7c8114","Type":"ContainerStarted","Data":"b50a4d0a1ddfbaa94cfd867154cc86e28519de2ead4b92fabaed06b615aa5aed"} Dec 09 14:40:02 crc kubenswrapper[4770]: I1209 14:40:02.772241 4770 generic.go:334] "Generic (PLEG): container finished" podID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerID="ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e" exitCode=0 Dec 09 14:40:02 crc kubenswrapper[4770]: I1209 14:40:02.772343 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jqgq" event={"ID":"b7d14273-442f-40b8-a209-f5e72c7c8114","Type":"ContainerDied","Data":"ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e"} Dec 09 14:40:03 crc kubenswrapper[4770]: I1209 14:40:03.781750 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jqgq" event={"ID":"b7d14273-442f-40b8-a209-f5e72c7c8114","Type":"ContainerStarted","Data":"d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2"} Dec 09 14:40:04 crc kubenswrapper[4770]: I1209 14:40:04.835186 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-699f6cb9bd-z46mv" Dec 09 14:40:04 crc kubenswrapper[4770]: I1209 14:40:04.859291 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-5jqgq" podStartSLOduration=3.485476491 podStartE2EDuration="4.859269215s" podCreationTimestamp="2025-12-09 14:40:00 +0000 UTC" firstStartedPulling="2025-12-09 14:40:01.765536568 +0000 UTC m=+1033.661738704" lastFinishedPulling="2025-12-09 14:40:03.139329292 +0000 UTC m=+1035.035531428" observedRunningTime="2025-12-09 14:40:03.811481677 +0000 UTC m=+1035.707683823" watchObservedRunningTime="2025-12-09 14:40:04.859269215 +0000 UTC m=+1036.755471351" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.587274 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9c7pl"] Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.590157 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.593152 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.594294 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.594450 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2kvvb" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.604464 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79"] Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.605615 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.606839 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.623470 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79"] Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.698000 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-b524m"] Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.699282 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.704069 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.704094 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.704247 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-c4gxs" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.704810 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.705748 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-2qk72"] Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.707046 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.708836 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.718174 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-2qk72"] Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.749303 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctjbr\" (UniqueName: \"kubernetes.io/projected/0355fd4f-c3df-4794-a794-567f018e52fa-kube-api-access-ctjbr\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.749365 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-frr-sockets\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.749384 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-frr-conf\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.749487 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35cc164c-de5c-43cd-8a8d-38684a35227e-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4tn79\" (UID: \"35cc164c-de5c-43cd-8a8d-38684a35227e\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.749528 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0355fd4f-c3df-4794-a794-567f018e52fa-frr-startup\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.749575 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4bg\" (UniqueName: \"kubernetes.io/projected/35cc164c-de5c-43cd-8a8d-38684a35227e-kube-api-access-tx4bg\") pod \"frr-k8s-webhook-server-7fcb986d4-4tn79\" (UID: \"35cc164c-de5c-43cd-8a8d-38684a35227e\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.749596 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-metrics\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.749651 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-reloader\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc 
kubenswrapper[4770]: I1209 14:40:05.749677 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0355fd4f-c3df-4794-a794-567f018e52fa-metrics-certs\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850399 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-cert\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850439 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d5627024-8337-474b-b04c-95c60e08308e-metallb-excludel2\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850455 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7rgc\" (UniqueName: \"kubernetes.io/projected/d5627024-8337-474b-b04c-95c60e08308e-kube-api-access-l7rgc\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850472 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-metrics-certs\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850490 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850548 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldqc\" (UniqueName: \"kubernetes.io/projected/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-kube-api-access-7ldqc\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850604 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-reloader\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850636 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0355fd4f-c3df-4794-a794-567f018e52fa-metrics-certs\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850676 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ctjbr\" (UniqueName: \"kubernetes.io/projected/0355fd4f-c3df-4794-a794-567f018e52fa-kube-api-access-ctjbr\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850719 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-frr-sockets\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850756 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-frr-conf\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850783 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-metrics-certs\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850806 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35cc164c-de5c-43cd-8a8d-38684a35227e-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4tn79\" (UID: \"35cc164c-de5c-43cd-8a8d-38684a35227e\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850825 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0355fd4f-c3df-4794-a794-567f018e52fa-frr-startup\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850852 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4bg\" (UniqueName: \"kubernetes.io/projected/35cc164c-de5c-43cd-8a8d-38684a35227e-kube-api-access-tx4bg\") pod \"frr-k8s-webhook-server-7fcb986d4-4tn79\" (UID: \"35cc164c-de5c-43cd-8a8d-38684a35227e\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.850876 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-metrics\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.851300 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-metrics\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.851509 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-reloader\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " 
pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: E1209 14:40:05.851758 4770 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 09 14:40:05 crc kubenswrapper[4770]: E1209 14:40:05.851853 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35cc164c-de5c-43cd-8a8d-38684a35227e-cert podName:35cc164c-de5c-43cd-8a8d-38684a35227e nodeName:}" failed. No retries permitted until 2025-12-09 14:40:06.351829881 +0000 UTC m=+1038.248032087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/35cc164c-de5c-43cd-8a8d-38684a35227e-cert") pod "frr-k8s-webhook-server-7fcb986d4-4tn79" (UID: "35cc164c-de5c-43cd-8a8d-38684a35227e") : secret "frr-k8s-webhook-server-cert" not found Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.852249 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-frr-conf\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.852575 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0355fd4f-c3df-4794-a794-567f018e52fa-frr-sockets\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.852770 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0355fd4f-c3df-4794-a794-567f018e52fa-frr-startup\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.866204 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0355fd4f-c3df-4794-a794-567f018e52fa-metrics-certs\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.875405 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4bg\" (UniqueName: \"kubernetes.io/projected/35cc164c-de5c-43cd-8a8d-38684a35227e-kube-api-access-tx4bg\") pod \"frr-k8s-webhook-server-7fcb986d4-4tn79\" (UID: \"35cc164c-de5c-43cd-8a8d-38684a35227e\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.890064 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctjbr\" (UniqueName: \"kubernetes.io/projected/0355fd4f-c3df-4794-a794-567f018e52fa-kube-api-access-ctjbr\") pod \"frr-k8s-9c7pl\" (UID: \"0355fd4f-c3df-4794-a794-567f018e52fa\") " pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.904360 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.952333 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-metrics-certs\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.952851 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-cert\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.952872 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d5627024-8337-474b-b04c-95c60e08308e-metallb-excludel2\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.952888 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7rgc\" (UniqueName: \"kubernetes.io/projected/d5627024-8337-474b-b04c-95c60e08308e-kube-api-access-l7rgc\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.952903 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-metrics-certs\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.952917 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.952934 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldqc\" (UniqueName: \"kubernetes.io/projected/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-kube-api-access-7ldqc\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:05 crc kubenswrapper[4770]: E1209 14:40:05.953021 4770 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 09 14:40:05 crc kubenswrapper[4770]: E1209 14:40:05.953087 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-metrics-certs podName:c30d2bd5-635c-4c4b-bd25-c0a0d91009ca nodeName:}" failed. No retries permitted until 2025-12-09 14:40:06.453068549 +0000 UTC m=+1038.349270675 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-metrics-certs") pod "controller-f8648f98b-2qk72" (UID: "c30d2bd5-635c-4c4b-bd25-c0a0d91009ca") : secret "controller-certs-secret" not found Dec 09 14:40:05 crc kubenswrapper[4770]: E1209 14:40:05.953310 4770 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 14:40:05 crc kubenswrapper[4770]: E1209 14:40:05.953362 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist podName:d5627024-8337-474b-b04c-95c60e08308e nodeName:}" failed. No retries permitted until 2025-12-09 14:40:06.453348567 +0000 UTC m=+1038.349550703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist") pod "speaker-b524m" (UID: "d5627024-8337-474b-b04c-95c60e08308e") : secret "metallb-memberlist" not found Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.954058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d5627024-8337-474b-b04c-95c60e08308e-metallb-excludel2\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.957170 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.957427 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-metrics-certs\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.968250 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-cert\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.972355 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7rgc\" (UniqueName: \"kubernetes.io/projected/d5627024-8337-474b-b04c-95c60e08308e-kube-api-access-l7rgc\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:05 crc kubenswrapper[4770]: I1209 14:40:05.976013 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldqc\" (UniqueName: \"kubernetes.io/projected/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-kube-api-access-7ldqc\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:06 crc kubenswrapper[4770]: I1209 14:40:06.358833 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35cc164c-de5c-43cd-8a8d-38684a35227e-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4tn79\" (UID: \"35cc164c-de5c-43cd-8a8d-38684a35227e\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:06 crc kubenswrapper[4770]: I1209 14:40:06.362111 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35cc164c-de5c-43cd-8a8d-38684a35227e-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4tn79\" (UID: \"35cc164c-de5c-43cd-8a8d-38684a35227e\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:06 crc kubenswrapper[4770]: I1209 14:40:06.460143 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:06 crc kubenswrapper[4770]: I1209 14:40:06.460200 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-metrics-certs\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:06 crc kubenswrapper[4770]: E1209 14:40:06.460388 4770 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 14:40:06 crc kubenswrapper[4770]: E1209 14:40:06.460484 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist podName:d5627024-8337-474b-b04c-95c60e08308e nodeName:}" failed. No retries permitted until 2025-12-09 14:40:07.460459882 +0000 UTC m=+1039.356662048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist") pod "speaker-b524m" (UID: "d5627024-8337-474b-b04c-95c60e08308e") : secret "metallb-memberlist" not found Dec 09 14:40:06 crc kubenswrapper[4770]: I1209 14:40:06.463531 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c30d2bd5-635c-4c4b-bd25-c0a0d91009ca-metrics-certs\") pod \"controller-f8648f98b-2qk72\" (UID: \"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca\") " pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:06 crc kubenswrapper[4770]: I1209 14:40:06.526940 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:06 crc kubenswrapper[4770]: I1209 14:40:06.625147 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:06 crc kubenswrapper[4770]: I1209 14:40:06.799550 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerStarted","Data":"b269f4840ff0fb4ba21d18a6e2ad475cdfdedfcfc7a69e09de4651c3426efc12"} Dec 09 14:40:06 crc kubenswrapper[4770]: I1209 14:40:06.995652 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79"] Dec 09 14:40:07 crc kubenswrapper[4770]: I1209 14:40:07.051192 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-2qk72"] Dec 09 14:40:07 crc kubenswrapper[4770]: W1209 14:40:07.053619 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc30d2bd5_635c_4c4b_bd25_c0a0d91009ca.slice/crio-bfe5e8c134a927845c06bab365787cea6386fdb7d4ae3a3e975b5aa63acd21a4 WatchSource:0}: Error finding container bfe5e8c134a927845c06bab365787cea6386fdb7d4ae3a3e975b5aa63acd21a4: Status 404 returned error can't find the container with id bfe5e8c134a927845c06bab365787cea6386fdb7d4ae3a3e975b5aa63acd21a4 Dec 09 14:40:07 crc kubenswrapper[4770]: I1209 14:40:07.485533 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:07 crc kubenswrapper[4770]: E1209 14:40:07.485833 4770 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 14:40:07 crc kubenswrapper[4770]: E1209 14:40:07.485962 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist podName:d5627024-8337-474b-b04c-95c60e08308e nodeName:}" failed. No retries permitted until 2025-12-09 14:40:09.485936127 +0000 UTC m=+1041.382138263 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist") pod "speaker-b524m" (UID: "d5627024-8337-474b-b04c-95c60e08308e") : secret "metallb-memberlist" not found Dec 09 14:40:07 crc kubenswrapper[4770]: I1209 14:40:07.808718 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" event={"ID":"35cc164c-de5c-43cd-8a8d-38684a35227e","Type":"ContainerStarted","Data":"cf060e16abca2b104390829b01296a3226f84437ed141542883a19581bff0b33"} Dec 09 14:40:07 crc kubenswrapper[4770]: I1209 14:40:07.811276 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-2qk72" event={"ID":"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca","Type":"ContainerStarted","Data":"30907987d28f65025c6d6d4d870c2f9c8d2a5ae50c575f9a9dcc2bcd89576a9b"} Dec 09 14:40:07 crc kubenswrapper[4770]: I1209 14:40:07.811336 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-2qk72" event={"ID":"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca","Type":"ContainerStarted","Data":"3486a4f2b644d527c252c3fb380cb8c91c45d4b0f9e1f0771d4cdd657ad35b72"} Dec 09 14:40:07 crc kubenswrapper[4770]: I1209 14:40:07.811346 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-2qk72" event={"ID":"c30d2bd5-635c-4c4b-bd25-c0a0d91009ca","Type":"ContainerStarted","Data":"bfe5e8c134a927845c06bab365787cea6386fdb7d4ae3a3e975b5aa63acd21a4"} Dec 09 14:40:07 crc kubenswrapper[4770]: I1209 14:40:07.811473 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:07 crc kubenswrapper[4770]: I1209 14:40:07.838549 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-2qk72" podStartSLOduration=2.838535897 podStartE2EDuration="2.838535897s" podCreationTimestamp="2025-12-09 14:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:40:07.833964419 +0000 UTC m=+1039.730166555" watchObservedRunningTime="2025-12-09 14:40:07.838535897 +0000 UTC m=+1039.734738033" Dec 09 14:40:09 crc kubenswrapper[4770]: I1209 14:40:09.516511 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:09 crc kubenswrapper[4770]: I1209 14:40:09.522229 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d5627024-8337-474b-b04c-95c60e08308e-memberlist\") pod \"speaker-b524m\" (UID: \"d5627024-8337-474b-b04c-95c60e08308e\") " pod="metallb-system/speaker-b524m" Dec 09 14:40:09 crc kubenswrapper[4770]: I1209 14:40:09.617303 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-b524m" Dec 09 14:40:09 crc kubenswrapper[4770]: I1209 14:40:09.830452 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-b524m" event={"ID":"d5627024-8337-474b-b04c-95c60e08308e","Type":"ContainerStarted","Data":"078bd0c0002ae77c946261972f1415fe8b6037c33553df59a143e40b06e8e3e5"} Dec 09 14:40:10 crc kubenswrapper[4770]: I1209 14:40:10.606526 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:10 crc kubenswrapper[4770]: I1209 14:40:10.606873 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:10 crc kubenswrapper[4770]: I1209 14:40:10.656645 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:10 crc kubenswrapper[4770]: I1209 14:40:10.843005 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-b524m" event={"ID":"d5627024-8337-474b-b04c-95c60e08308e","Type":"ContainerStarted","Data":"e30746a2bee2edd083a389aaf5118bc32394d75fb6ace68a73057eec5c6b748f"} Dec 09 14:40:10 crc kubenswrapper[4770]: I1209 14:40:10.843051 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-b524m" event={"ID":"d5627024-8337-474b-b04c-95c60e08308e","Type":"ContainerStarted","Data":"b02954b9e5667e2d726e4b2461b8b4d49e4ee0ab898484355ee2abfe085803ef"} Dec 09 14:40:10 crc kubenswrapper[4770]: I1209 14:40:10.843703 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-b524m" Dec 09 14:40:10 crc kubenswrapper[4770]: I1209 14:40:10.868551 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-b524m" podStartSLOduration=5.868535723 podStartE2EDuration="5.868535723s" podCreationTimestamp="2025-12-09 14:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:40:10.866489107 +0000 UTC m=+1042.762691263" watchObservedRunningTime="2025-12-09 14:40:10.868535723 +0000 UTC m=+1042.764737859" Dec 09 14:40:10 crc kubenswrapper[4770]: I1209 14:40:10.908164 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:10 crc kubenswrapper[4770]: I1209 14:40:10.954206 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jqgq"] Dec 09 14:40:12 crc kubenswrapper[4770]: I1209 14:40:12.859198 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5jqgq" podUID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerName="registry-server" containerID="cri-o://d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2" gracePeriod=2 Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.567586 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.673182 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-catalog-content\") pod \"b7d14273-442f-40b8-a209-f5e72c7c8114\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.673249 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s44cv\" (UniqueName: \"kubernetes.io/projected/b7d14273-442f-40b8-a209-f5e72c7c8114-kube-api-access-s44cv\") pod \"b7d14273-442f-40b8-a209-f5e72c7c8114\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.673349 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-utilities\") pod \"b7d14273-442f-40b8-a209-f5e72c7c8114\" (UID: \"b7d14273-442f-40b8-a209-f5e72c7c8114\") " Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.678337 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-utilities" (OuterVolumeSpecName: "utilities") pod "b7d14273-442f-40b8-a209-f5e72c7c8114" (UID: "b7d14273-442f-40b8-a209-f5e72c7c8114"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.682479 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d14273-442f-40b8-a209-f5e72c7c8114-kube-api-access-s44cv" (OuterVolumeSpecName: "kube-api-access-s44cv") pod "b7d14273-442f-40b8-a209-f5e72c7c8114" (UID: "b7d14273-442f-40b8-a209-f5e72c7c8114"). InnerVolumeSpecName "kube-api-access-s44cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.698389 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7d14273-442f-40b8-a209-f5e72c7c8114" (UID: "b7d14273-442f-40b8-a209-f5e72c7c8114"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.774347 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.774387 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d14273-442f-40b8-a209-f5e72c7c8114-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.774402 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s44cv\" (UniqueName: \"kubernetes.io/projected/b7d14273-442f-40b8-a209-f5e72c7c8114-kube-api-access-s44cv\") on node \"crc\" DevicePath \"\"" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.866480 4770 generic.go:334] "Generic (PLEG): container finished" podID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerID="d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2" exitCode=0 Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.866538 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jqgq" event={"ID":"b7d14273-442f-40b8-a209-f5e72c7c8114","Type":"ContainerDied","Data":"d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2"} Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.866574 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jqgq" event={"ID":"b7d14273-442f-40b8-a209-f5e72c7c8114","Type":"ContainerDied","Data":"b50a4d0a1ddfbaa94cfd867154cc86e28519de2ead4b92fabaed06b615aa5aed"} Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.866591 4770 scope.go:117] "RemoveContainer" containerID="d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.866685 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jqgq" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.869623 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" event={"ID":"35cc164c-de5c-43cd-8a8d-38684a35227e","Type":"ContainerStarted","Data":"f56b9149c230265dad0754afbfce562c118d0c0007525058892354f673f4cc0c"} Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.870041 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.871607 4770 generic.go:334] "Generic (PLEG): container finished" podID="0355fd4f-c3df-4794-a794-567f018e52fa" containerID="5870a5d8a6a42262d1eb73188bcbb294c09b93ce3d1b2877808093048809c77b" exitCode=0 Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.871800 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerDied","Data":"5870a5d8a6a42262d1eb73188bcbb294c09b93ce3d1b2877808093048809c77b"} Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.892292 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" podStartSLOduration=2.509583419 podStartE2EDuration="8.892266336s" podCreationTimestamp="2025-12-09 14:40:05 +0000 UTC" firstStartedPulling="2025-12-09 14:40:07.004196441 +0000 UTC m=+1038.900398587" lastFinishedPulling="2025-12-09 14:40:13.386879368 +0000 UTC m=+1045.283081504" observedRunningTime="2025-12-09 14:40:13.886954217 +0000 UTC m=+1045.783156353" watchObservedRunningTime="2025-12-09 14:40:13.892266336 +0000 UTC m=+1045.788468482" Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.928643 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jqgq"] Dec 09 14:40:13 crc kubenswrapper[4770]: I1209 14:40:13.933158 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jqgq"] Dec 09 14:40:14 crc kubenswrapper[4770]: I1209 14:40:14.527351 4770 scope.go:117] "RemoveContainer" containerID="ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e" Dec 09 14:40:14 crc kubenswrapper[4770]: I1209 14:40:14.548985 4770 scope.go:117] "RemoveContainer" containerID="168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00" Dec 09 14:40:14 crc kubenswrapper[4770]: I1209 14:40:14.568133 4770 scope.go:117] "RemoveContainer" containerID="d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2" Dec 09 14:40:14 crc kubenswrapper[4770]: E1209 14:40:14.568560 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2\": container with ID starting with d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2 not found: ID does not exist" containerID="d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2" Dec 09 14:40:14 crc kubenswrapper[4770]: I1209 14:40:14.568590 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2"} err="failed to get container status \"d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2\": rpc error: code = NotFound desc = could not find container 
\"d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2\": container with ID starting with d16d1c5da1db11d5890e142f51588b192e913492503f4b65c74e9d20354beca2 not found: ID does not exist" Dec 09 14:40:14 crc kubenswrapper[4770]: I1209 14:40:14.568612 4770 scope.go:117] "RemoveContainer" containerID="ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e" Dec 09 14:40:14 crc kubenswrapper[4770]: E1209 14:40:14.569086 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e\": container with ID starting with ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e not found: ID does not exist" containerID="ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e" Dec 09 14:40:14 crc kubenswrapper[4770]: I1209 14:40:14.569108 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e"} err="failed to get container status \"ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e\": rpc error: code = NotFound desc = could not find container \"ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e\": container with ID starting with ecbc02a70f5f224e3a26c41defa3d849f35b9df28bd3f9c3464e4bd70e52782e not found: ID does not exist" Dec 09 14:40:14 crc kubenswrapper[4770]: I1209 14:40:14.569121 4770 scope.go:117] "RemoveContainer" containerID="168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00" Dec 09 14:40:14 crc kubenswrapper[4770]: E1209 14:40:14.569395 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00\": container with ID starting with 168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00 not found: ID does not exist" containerID="168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00" Dec 09 14:40:14 crc kubenswrapper[4770]: I1209 14:40:14.569447 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00"} err="failed to get container status \"168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00\": rpc error: code = NotFound desc = could not find container \"168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00\": container with ID starting with 168b4cc67ac6abfa66dcd5c7275648589248d9e902deb941ce00a4af086cfc00 not found: ID does not exist" Dec 09 14:40:14 crc kubenswrapper[4770]: I1209 14:40:14.602940 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d14273-442f-40b8-a209-f5e72c7c8114" path="/var/lib/kubelet/pods/b7d14273-442f-40b8-a209-f5e72c7c8114/volumes" Dec 09 14:40:15 crc kubenswrapper[4770]: I1209 14:40:15.893370 4770 generic.go:334] "Generic (PLEG): container finished" podID="0355fd4f-c3df-4794-a794-567f018e52fa" containerID="2c0fe27750a9ec8c0ee41a53ef04b2b755afca5d32dedd72e7689c17dbcc9147" exitCode=0 Dec 09 14:40:15 crc kubenswrapper[4770]: I1209 14:40:15.893472 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerDied","Data":"2c0fe27750a9ec8c0ee41a53ef04b2b755afca5d32dedd72e7689c17dbcc9147"} Dec 09 14:40:16 crc kubenswrapper[4770]: I1209 14:40:16.904450 4770 
generic.go:334] "Generic (PLEG): container finished" podID="0355fd4f-c3df-4794-a794-567f018e52fa" containerID="81d534f134fb3c10e37b087335c7f1b28d3b738495c4f2f4a5e31948984dc04b" exitCode=0 Dec 09 14:40:16 crc kubenswrapper[4770]: I1209 14:40:16.904524 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerDied","Data":"81d534f134fb3c10e37b087335c7f1b28d3b738495c4f2f4a5e31948984dc04b"} Dec 09 14:40:17 crc kubenswrapper[4770]: I1209 14:40:17.912972 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerStarted","Data":"031738983c8fba2e73cf3d37a9ae5c0c5b68f6249eb2e594e1a9fd37c6899386"} Dec 09 14:40:17 crc kubenswrapper[4770]: I1209 14:40:17.913331 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerStarted","Data":"44c307fcc46c77b4b7d02d07db1a538f0250ec094d3f1d5043fcb9e27eb3ae72"} Dec 09 14:40:18 crc kubenswrapper[4770]: I1209 14:40:18.925587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerStarted","Data":"6d2a7530dfdacf1b93a0cbc2ad0714b8bc2a7a564545e5f5a4dad3321dcaad6d"} Dec 09 14:40:19 crc kubenswrapper[4770]: I1209 14:40:19.622164 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-b524m" Dec 09 14:40:20 crc kubenswrapper[4770]: I1209 14:40:20.951929 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerStarted","Data":"8b3d56483cfdd93a6b3b65b4b2b0d0900024315fbd876ef830dda08cdfb4dd31"} Dec 09 14:40:20 crc kubenswrapper[4770]: I1209 14:40:20.952286 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerStarted","Data":"42592f6894be51628bd2558d90e2e52b688ae4cccd44368c3ac2476b2170f85d"} Dec 09 14:40:21 crc kubenswrapper[4770]: I1209 14:40:21.960931 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9c7pl" event={"ID":"0355fd4f-c3df-4794-a794-567f018e52fa","Type":"ContainerStarted","Data":"63287516f5b1ffafa0b9c72de2d5bef85acd281b6ad66076574dac73c85fe145"} Dec 09 14:40:21 crc kubenswrapper[4770]: I1209 14:40:21.961142 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:21 crc kubenswrapper[4770]: I1209 14:40:21.982930 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9c7pl" podStartSLOduration=10.313987258 podStartE2EDuration="16.982911422s" podCreationTimestamp="2025-12-09 14:40:05 +0000 UTC" firstStartedPulling="2025-12-09 14:40:06.710906758 +0000 UTC m=+1038.607108904" lastFinishedPulling="2025-12-09 14:40:13.379830932 +0000 UTC m=+1045.276033068" observedRunningTime="2025-12-09 14:40:21.981543553 +0000 UTC m=+1053.877745689" watchObservedRunningTime="2025-12-09 14:40:21.982911422 +0000 UTC m=+1053.879113578" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.652332 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-c7wjv"] Dec 09 14:40:22 crc kubenswrapper[4770]: E1209 14:40:22.652997 4770 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerName="extract-content" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.653023 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerName="extract-content" Dec 09 14:40:22 crc kubenswrapper[4770]: E1209 14:40:22.653040 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerName="registry-server" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.653049 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerName="registry-server" Dec 09 14:40:22 crc kubenswrapper[4770]: E1209 14:40:22.653072 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerName="extract-utilities" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.653080 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerName="extract-utilities" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.653236 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d14273-442f-40b8-a209-f5e72c7c8114" containerName="registry-server" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.653793 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-c7wjv" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.661361 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.661621 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-sm7vb" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.661774 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.664985 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-c7wjv"] Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.745927 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mxn\" (UniqueName: \"kubernetes.io/projected/d1a43785-f3b9-4c84-abcb-8eb93914fd13-kube-api-access-f2mxn\") pod \"openstack-operator-index-c7wjv\" (UID: \"d1a43785-f3b9-4c84-abcb-8eb93914fd13\") " pod="openstack-operators/openstack-operator-index-c7wjv" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.847004 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mxn\" (UniqueName: \"kubernetes.io/projected/d1a43785-f3b9-4c84-abcb-8eb93914fd13-kube-api-access-f2mxn\") pod \"openstack-operator-index-c7wjv\" (UID: \"d1a43785-f3b9-4c84-abcb-8eb93914fd13\") " pod="openstack-operators/openstack-operator-index-c7wjv" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 14:40:22.864619 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mxn\" (UniqueName: \"kubernetes.io/projected/d1a43785-f3b9-4c84-abcb-8eb93914fd13-kube-api-access-f2mxn\") pod \"openstack-operator-index-c7wjv\" (UID: \"d1a43785-f3b9-4c84-abcb-8eb93914fd13\") " pod="openstack-operators/openstack-operator-index-c7wjv" Dec 09 14:40:22 crc kubenswrapper[4770]: I1209 
14:40:22.977276 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-c7wjv" Dec 09 14:40:23 crc kubenswrapper[4770]: I1209 14:40:23.414405 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-c7wjv"] Dec 09 14:40:23 crc kubenswrapper[4770]: W1209 14:40:23.417936 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1a43785_f3b9_4c84_abcb_8eb93914fd13.slice/crio-a8c1510a3fc2cf49f0e120030da50c90a1da64b5f0b4c0990bfa55f4df07fef6 WatchSource:0}: Error finding container a8c1510a3fc2cf49f0e120030da50c90a1da64b5f0b4c0990bfa55f4df07fef6: Status 404 returned error can't find the container with id a8c1510a3fc2cf49f0e120030da50c90a1da64b5f0b4c0990bfa55f4df07fef6 Dec 09 14:40:23 crc kubenswrapper[4770]: I1209 14:40:23.980544 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c7wjv" event={"ID":"d1a43785-f3b9-4c84-abcb-8eb93914fd13","Type":"ContainerStarted","Data":"a8c1510a3fc2cf49f0e120030da50c90a1da64b5f0b4c0990bfa55f4df07fef6"} Dec 09 14:40:25 crc kubenswrapper[4770]: I1209 14:40:25.616249 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-c7wjv"] Dec 09 14:40:25 crc kubenswrapper[4770]: I1209 14:40:25.905179 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:25 crc kubenswrapper[4770]: I1209 14:40:25.950874 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:25 crc kubenswrapper[4770]: I1209 14:40:25.997814 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-c7wjv" podUID="d1a43785-f3b9-4c84-abcb-8eb93914fd13" containerName="registry-server" containerID="cri-o://b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce" gracePeriod=2 Dec 09 14:40:25 crc kubenswrapper[4770]: I1209 14:40:25.997993 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c7wjv" event={"ID":"d1a43785-f3b9-4c84-abcb-8eb93914fd13","Type":"ContainerStarted","Data":"b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce"} Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.222272 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-c7wjv" podStartSLOduration=1.8279590190000001 podStartE2EDuration="4.222250729s" podCreationTimestamp="2025-12-09 14:40:22 +0000 UTC" firstStartedPulling="2025-12-09 14:40:23.420417166 +0000 UTC m=+1055.316619302" lastFinishedPulling="2025-12-09 14:40:25.814708876 +0000 UTC m=+1057.710911012" observedRunningTime="2025-12-09 14:40:26.036128201 +0000 UTC m=+1057.932330347" watchObservedRunningTime="2025-12-09 14:40:26.222250729 +0000 UTC m=+1058.118452865" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.227523 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8npb8"] Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.232747 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8npb8" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.234273 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8npb8"] Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.350593 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-c7wjv" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.395059 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2g9g\" (UniqueName: \"kubernetes.io/projected/f296b8f1-163d-4831-add9-bc8b63e3bf77-kube-api-access-g2g9g\") pod \"openstack-operator-index-8npb8\" (UID: \"f296b8f1-163d-4831-add9-bc8b63e3bf77\") " pod="openstack-operators/openstack-operator-index-8npb8" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.495537 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2mxn\" (UniqueName: \"kubernetes.io/projected/d1a43785-f3b9-4c84-abcb-8eb93914fd13-kube-api-access-f2mxn\") pod \"d1a43785-f3b9-4c84-abcb-8eb93914fd13\" (UID: \"d1a43785-f3b9-4c84-abcb-8eb93914fd13\") " Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.495882 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2g9g\" (UniqueName: \"kubernetes.io/projected/f296b8f1-163d-4831-add9-bc8b63e3bf77-kube-api-access-g2g9g\") pod \"openstack-operator-index-8npb8\" (UID: \"f296b8f1-163d-4831-add9-bc8b63e3bf77\") " pod="openstack-operators/openstack-operator-index-8npb8" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.510047 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a43785-f3b9-4c84-abcb-8eb93914fd13-kube-api-access-f2mxn" (OuterVolumeSpecName: "kube-api-access-f2mxn") pod "d1a43785-f3b9-4c84-abcb-8eb93914fd13" (UID: "d1a43785-f3b9-4c84-abcb-8eb93914fd13"). InnerVolumeSpecName "kube-api-access-f2mxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.514671 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2g9g\" (UniqueName: \"kubernetes.io/projected/f296b8f1-163d-4831-add9-bc8b63e3bf77-kube-api-access-g2g9g\") pod \"openstack-operator-index-8npb8\" (UID: \"f296b8f1-163d-4831-add9-bc8b63e3bf77\") " pod="openstack-operators/openstack-operator-index-8npb8" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.534983 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4tn79" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.559072 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8npb8" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.597225 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2mxn\" (UniqueName: \"kubernetes.io/projected/d1a43785-f3b9-4c84-abcb-8eb93914fd13-kube-api-access-f2mxn\") on node \"crc\" DevicePath \"\"" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.629387 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-2qk72" Dec 09 14:40:26 crc kubenswrapper[4770]: I1209 14:40:26.993911 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8npb8"] Dec 09 14:40:26 crc kubenswrapper[4770]: W1209 14:40:26.996144 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf296b8f1_163d_4831_add9_bc8b63e3bf77.slice/crio-e4c9876bbae05c1756ec427a07af291536fbb5ef9e4e72eaa3faf64db8a87b35 WatchSource:0}: Error finding container e4c9876bbae05c1756ec427a07af291536fbb5ef9e4e72eaa3faf64db8a87b35: Status 404 returned error can't find the container with id e4c9876bbae05c1756ec427a07af291536fbb5ef9e4e72eaa3faf64db8a87b35 Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.003829 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8npb8" event={"ID":"f296b8f1-163d-4831-add9-bc8b63e3bf77","Type":"ContainerStarted","Data":"e4c9876bbae05c1756ec427a07af291536fbb5ef9e4e72eaa3faf64db8a87b35"} Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.005432 4770 generic.go:334] "Generic (PLEG): container finished" podID="d1a43785-f3b9-4c84-abcb-8eb93914fd13" containerID="b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce" exitCode=0 Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.005460 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c7wjv" event={"ID":"d1a43785-f3b9-4c84-abcb-8eb93914fd13","Type":"ContainerDied","Data":"b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce"} Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.005476 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c7wjv" event={"ID":"d1a43785-f3b9-4c84-abcb-8eb93914fd13","Type":"ContainerDied","Data":"a8c1510a3fc2cf49f0e120030da50c90a1da64b5f0b4c0990bfa55f4df07fef6"} Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.005491 4770 scope.go:117] "RemoveContainer" containerID="b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce" Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.005572 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-c7wjv" Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.021661 4770 scope.go:117] "RemoveContainer" containerID="b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce" Dec 09 14:40:27 crc kubenswrapper[4770]: E1209 14:40:27.022260 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce\": container with ID starting with b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce not found: ID does not exist" containerID="b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce" Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.022304 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce"} err="failed to get container status \"b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce\": rpc error: code = NotFound desc = could not find container \"b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce\": container with ID starting with b2e69b8743273ded830939c5055bd06c52a8a178a99f9e67ea082ab60da3e4ce not found: ID does not exist" Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.029489 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-c7wjv"] Dec 09 14:40:27 crc kubenswrapper[4770]: I1209 14:40:27.036282 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-c7wjv"] Dec 09 14:40:28 crc kubenswrapper[4770]: I1209 14:40:28.599342 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a43785-f3b9-4c84-abcb-8eb93914fd13" path="/var/lib/kubelet/pods/d1a43785-f3b9-4c84-abcb-8eb93914fd13/volumes" Dec 09 14:40:29 crc kubenswrapper[4770]: I1209 14:40:29.022669 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8npb8" event={"ID":"f296b8f1-163d-4831-add9-bc8b63e3bf77","Type":"ContainerStarted","Data":"333a85e7a4d573f5f5f0c45ea88127c1f4caa08041abd431ba241feb704f59b0"} Dec 09 14:40:29 crc kubenswrapper[4770]: I1209 14:40:29.042575 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8npb8" podStartSLOduration=1.9759839559999999 podStartE2EDuration="3.042553969s" podCreationTimestamp="2025-12-09 14:40:26 +0000 UTC" firstStartedPulling="2025-12-09 14:40:27.000383055 +0000 UTC m=+1058.896585201" lastFinishedPulling="2025-12-09 14:40:28.066953038 +0000 UTC m=+1059.963155214" observedRunningTime="2025-12-09 14:40:29.039464463 +0000 UTC m=+1060.935666609" watchObservedRunningTime="2025-12-09 14:40:29.042553969 +0000 UTC m=+1060.938756105" Dec 09 14:40:35 crc kubenswrapper[4770]: I1209 14:40:35.910789 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9c7pl" Dec 09 14:40:36 crc kubenswrapper[4770]: I1209 14:40:36.560285 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8npb8" Dec 09 14:40:36 crc kubenswrapper[4770]: I1209 14:40:36.560761 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8npb8" Dec 09 14:40:36 crc kubenswrapper[4770]: I1209 14:40:36.584105 4770 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8npb8" Dec 09 14:40:37 crc kubenswrapper[4770]: I1209 14:40:37.115597 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8npb8" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.472335 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv"] Dec 09 14:40:44 crc kubenswrapper[4770]: E1209 14:40:44.473156 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a43785-f3b9-4c84-abcb-8eb93914fd13" containerName="registry-server" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.473171 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a43785-f3b9-4c84-abcb-8eb93914fd13" containerName="registry-server" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.473325 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a43785-f3b9-4c84-abcb-8eb93914fd13" containerName="registry-server" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.474364 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.480523 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv"] Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.481034 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gvzjl" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.577288 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-bundle\") pod \"65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.577378 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-util\") pod \"65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.577608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dn7f\" (UniqueName: \"kubernetes.io/projected/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-kube-api-access-5dn7f\") pod \"65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.678456 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-util\") pod \"65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " 
pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.678534 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dn7f\" (UniqueName: \"kubernetes.io/projected/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-kube-api-access-5dn7f\") pod \"65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.678570 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-bundle\") pod \"65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.678927 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-util\") pod \"65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.678985 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-bundle\") pod \"65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.699342 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dn7f\" (UniqueName: \"kubernetes.io/projected/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-kube-api-access-5dn7f\") pod \"65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:44 crc kubenswrapper[4770]: I1209 14:40:44.791427 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:45 crc kubenswrapper[4770]: I1209 14:40:45.102542 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv"] Dec 09 14:40:46 crc kubenswrapper[4770]: I1209 14:40:46.349091 4770 generic.go:334] "Generic (PLEG): container finished" podID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerID="12db0782436e0dbadced1ffdeae0ccc7ba2d65c13c15b8d788698352df17d8ad" exitCode=0 Dec 09 14:40:46 crc kubenswrapper[4770]: I1209 14:40:46.349202 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" event={"ID":"7cb6ff7a-fa85-4173-b3dd-333fe01ec347","Type":"ContainerDied","Data":"12db0782436e0dbadced1ffdeae0ccc7ba2d65c13c15b8d788698352df17d8ad"} Dec 09 14:40:46 crc kubenswrapper[4770]: I1209 14:40:46.349414 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" event={"ID":"7cb6ff7a-fa85-4173-b3dd-333fe01ec347","Type":"ContainerStarted","Data":"40814e6018c55490e2920c90f4f5d52ba01612df3e73d9f1d4b41d23f580e749"} Dec 09 14:40:46 crc kubenswrapper[4770]: I1209 14:40:46.351145 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:40:48 crc kubenswrapper[4770]: I1209 14:40:48.369897 4770 generic.go:334] "Generic (PLEG): container finished" podID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerID="9ac77d51ca917deb12e369e1edb8f030e1828561fa4ac83370a061b1033cd6fa" exitCode=0 Dec 09 14:40:48 crc kubenswrapper[4770]: I1209 14:40:48.369968 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" event={"ID":"7cb6ff7a-fa85-4173-b3dd-333fe01ec347","Type":"ContainerDied","Data":"9ac77d51ca917deb12e369e1edb8f030e1828561fa4ac83370a061b1033cd6fa"} Dec 09 14:40:49 crc kubenswrapper[4770]: I1209 14:40:49.378701 4770 generic.go:334] "Generic (PLEG): container finished" podID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerID="b5bd47ca506d66ec8228f219bdaf988171b80d4a1a82191d8c6b05ee45e98a75" exitCode=0 Dec 09 14:40:49 crc kubenswrapper[4770]: I1209 14:40:49.378773 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" event={"ID":"7cb6ff7a-fa85-4173-b3dd-333fe01ec347","Type":"ContainerDied","Data":"b5bd47ca506d66ec8228f219bdaf988171b80d4a1a82191d8c6b05ee45e98a75"} Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.657689 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.723344 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dn7f\" (UniqueName: \"kubernetes.io/projected/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-kube-api-access-5dn7f\") pod \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.723468 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-bundle\") pod \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.723612 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-util\") pod \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\" (UID: \"7cb6ff7a-fa85-4173-b3dd-333fe01ec347\") " Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.724231 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-bundle" (OuterVolumeSpecName: "bundle") pod "7cb6ff7a-fa85-4173-b3dd-333fe01ec347" (UID: "7cb6ff7a-fa85-4173-b3dd-333fe01ec347"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.731360 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-kube-api-access-5dn7f" (OuterVolumeSpecName: "kube-api-access-5dn7f") pod "7cb6ff7a-fa85-4173-b3dd-333fe01ec347" (UID: "7cb6ff7a-fa85-4173-b3dd-333fe01ec347"). InnerVolumeSpecName "kube-api-access-5dn7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.737047 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-util" (OuterVolumeSpecName: "util") pod "7cb6ff7a-fa85-4173-b3dd-333fe01ec347" (UID: "7cb6ff7a-fa85-4173-b3dd-333fe01ec347"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.825124 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dn7f\" (UniqueName: \"kubernetes.io/projected/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-kube-api-access-5dn7f\") on node \"crc\" DevicePath \"\"" Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.825161 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:40:50 crc kubenswrapper[4770]: I1209 14:40:50.825171 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cb6ff7a-fa85-4173-b3dd-333fe01ec347-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:40:51 crc kubenswrapper[4770]: I1209 14:40:51.396527 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" event={"ID":"7cb6ff7a-fa85-4173-b3dd-333fe01ec347","Type":"ContainerDied","Data":"40814e6018c55490e2920c90f4f5d52ba01612df3e73d9f1d4b41d23f580e749"} Dec 09 14:40:51 crc kubenswrapper[4770]: I1209 14:40:51.396573 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv" Dec 09 14:40:51 crc kubenswrapper[4770]: I1209 14:40:51.396586 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40814e6018c55490e2920c90f4f5d52ba01612df3e73d9f1d4b41d23f580e749" Dec 09 14:40:53 crc kubenswrapper[4770]: E1209 14:40:53.109545 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb6ff7a_fa85_4173_b3dd_333fe01ec347.slice/crio-conmon-b5bd47ca506d66ec8228f219bdaf988171b80d4a1a82191d8c6b05ee45e98a75.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.570092 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8"] Dec 09 14:41:00 crc kubenswrapper[4770]: E1209 14:41:00.571022 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerName="extract" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.571042 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerName="extract" Dec 09 14:41:00 crc kubenswrapper[4770]: E1209 14:41:00.571061 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerName="util" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.571072 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerName="util" Dec 09 14:41:00 crc kubenswrapper[4770]: E1209 14:41:00.571101 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerName="pull" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.571111 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerName="pull" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.571290 4770 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7cb6ff7a-fa85-4173-b3dd-333fe01ec347" containerName="extract" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.571857 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.581866 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-2zxqj" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.615708 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8"] Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.762889 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kslgg\" (UniqueName: \"kubernetes.io/projected/02f8a1b7-96a5-4f27-865b-941490944ff6-kube-api-access-kslgg\") pod \"openstack-operator-controller-operator-574d99c486-2blm8\" (UID: \"02f8a1b7-96a5-4f27-865b-941490944ff6\") " pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.864280 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kslgg\" (UniqueName: \"kubernetes.io/projected/02f8a1b7-96a5-4f27-865b-941490944ff6-kube-api-access-kslgg\") pod \"openstack-operator-controller-operator-574d99c486-2blm8\" (UID: \"02f8a1b7-96a5-4f27-865b-941490944ff6\") " pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.882852 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kslgg\" (UniqueName: \"kubernetes.io/projected/02f8a1b7-96a5-4f27-865b-941490944ff6-kube-api-access-kslgg\") pod \"openstack-operator-controller-operator-574d99c486-2blm8\" (UID: \"02f8a1b7-96a5-4f27-865b-941490944ff6\") " pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" Dec 09 14:41:00 crc kubenswrapper[4770]: I1209 14:41:00.900586 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" Dec 09 14:41:01 crc kubenswrapper[4770]: I1209 14:41:01.322694 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8"] Dec 09 14:41:01 crc kubenswrapper[4770]: I1209 14:41:01.459508 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" event={"ID":"02f8a1b7-96a5-4f27-865b-941490944ff6","Type":"ContainerStarted","Data":"446b34bdedb8fdf28d3f6cfb2e4c72ba539604f46bbfef7fa7c255ff29251445"} Dec 09 14:41:03 crc kubenswrapper[4770]: E1209 14:41:03.256438 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb6ff7a_fa85_4173_b3dd_333fe01ec347.slice/crio-conmon-b5bd47ca506d66ec8228f219bdaf988171b80d4a1a82191d8c6b05ee45e98a75.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:41:07 crc kubenswrapper[4770]: I1209 14:41:07.555076 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" event={"ID":"02f8a1b7-96a5-4f27-865b-941490944ff6","Type":"ContainerStarted","Data":"2e5ba9340fee4cd8dd91d252c6f3e5af80d84c650901ab83ebe7bd7193f01556"} Dec 09 14:41:07 crc kubenswrapper[4770]: I1209 14:41:07.556421 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" Dec 09 14:41:07 crc kubenswrapper[4770]: I1209 14:41:07.590343 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" podStartSLOduration=2.143102205 podStartE2EDuration="7.590326562s" podCreationTimestamp="2025-12-09 14:41:00 +0000 UTC" firstStartedPulling="2025-12-09 14:41:01.320760515 +0000 UTC m=+1093.216962641" lastFinishedPulling="2025-12-09 14:41:06.767984832 +0000 UTC m=+1098.664186998" observedRunningTime="2025-12-09 14:41:07.585287831 +0000 UTC m=+1099.481489967" watchObservedRunningTime="2025-12-09 14:41:07.590326562 +0000 UTC m=+1099.486528698" Dec 09 14:41:13 crc kubenswrapper[4770]: E1209 14:41:13.401198 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb6ff7a_fa85_4173_b3dd_333fe01ec347.slice/crio-conmon-b5bd47ca506d66ec8228f219bdaf988171b80d4a1a82191d8c6b05ee45e98a75.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:41:20 crc kubenswrapper[4770]: I1209 14:41:20.903678 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-574d99c486-2blm8" Dec 09 14:41:23 crc kubenswrapper[4770]: E1209 14:41:23.586296 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb6ff7a_fa85_4173_b3dd_333fe01ec347.slice/crio-conmon-b5bd47ca506d66ec8228f219bdaf988171b80d4a1a82191d8c6b05ee45e98a75.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:41:33 crc kubenswrapper[4770]: E1209 14:41:33.738482 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb6ff7a_fa85_4173_b3dd_333fe01ec347.slice/crio-conmon-b5bd47ca506d66ec8228f219bdaf988171b80d4a1a82191d8c6b05ee45e98a75.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:41:43 crc kubenswrapper[4770]: E1209 14:41:43.976591 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb6ff7a_fa85_4173_b3dd_333fe01ec347.slice/crio-conmon-b5bd47ca506d66ec8228f219bdaf988171b80d4a1a82191d8c6b05ee45e98a75.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.240763 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.242482 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.245307 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-swbpv" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.245923 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.247446 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.254249 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xjs2r" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.267071 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.273802 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.289848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vgr\" (UniqueName: \"kubernetes.io/projected/fadaf4e5-6b66-4dc3-b51e-e4700db03792-kube-api-access-n7vgr\") pod \"cinder-operator-controller-manager-6c677c69b-twpm6\" (UID: \"fadaf4e5-6b66-4dc3-b51e-e4700db03792\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.289903 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8td\" (UniqueName: \"kubernetes.io/projected/218b9184-d581-40d1-bc52-734507d47b65-kube-api-access-5r8td\") pod \"barbican-operator-controller-manager-7d9dfd778-68275\" (UID: \"218b9184-d581-40d1-bc52-734507d47b65\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.294945 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.296350 4770 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.305352 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.306053 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-qdrw9" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.306429 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.311033 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tmqvg" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.314341 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.315398 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.319315 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-tqjnk" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.329420 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.359792 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.370446 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.390848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7p4\" (UniqueName: \"kubernetes.io/projected/5b04f571-f2cc-4486-91c9-d6f9f710f7fd-kube-api-access-gx7p4\") pod \"glance-operator-controller-manager-5697bb5779-zx7w6\" (UID: \"5b04f571-f2cc-4486-91c9-d6f9f710f7fd\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.390898 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8td\" (UniqueName: \"kubernetes.io/projected/218b9184-d581-40d1-bc52-734507d47b65-kube-api-access-5r8td\") pod \"barbican-operator-controller-manager-7d9dfd778-68275\" (UID: \"218b9184-d581-40d1-bc52-734507d47b65\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.390943 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hkbf\" (UniqueName: \"kubernetes.io/projected/e3ac9095-4890-4130-b7f2-00c7927f6890-kube-api-access-4hkbf\") pod \"heat-operator-controller-manager-5f64f6f8bb-6k7ff\" (UID: \"e3ac9095-4890-4130-b7f2-00c7927f6890\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" 
Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.390986 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vgr\" (UniqueName: \"kubernetes.io/projected/fadaf4e5-6b66-4dc3-b51e-e4700db03792-kube-api-access-n7vgr\") pod \"cinder-operator-controller-manager-6c677c69b-twpm6\" (UID: \"fadaf4e5-6b66-4dc3-b51e-e4700db03792\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.391007 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9nd5\" (UniqueName: \"kubernetes.io/projected/706fc0bc-1168-477f-b4ce-b30ea5b70bcf-kube-api-access-b9nd5\") pod \"designate-operator-controller-manager-697fb699cf-ljx9f\" (UID: \"706fc0bc-1168-477f-b4ce-b30ea5b70bcf\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.405194 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.406291 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.414221 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.415585 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.419874 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jnn8r" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.420224 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.420231 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8nkwn" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.426457 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.428183 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.435866 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-hj8nd" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.436451 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8td\" (UniqueName: \"kubernetes.io/projected/218b9184-d581-40d1-bc52-734507d47b65-kube-api-access-5r8td\") pod \"barbican-operator-controller-manager-7d9dfd778-68275\" (UID: \"218b9184-d581-40d1-bc52-734507d47b65\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.444591 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vgr\" (UniqueName: \"kubernetes.io/projected/fadaf4e5-6b66-4dc3-b51e-e4700db03792-kube-api-access-n7vgr\") pod \"cinder-operator-controller-manager-6c677c69b-twpm6\" (UID: \"fadaf4e5-6b66-4dc3-b51e-e4700db03792\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.460852 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.462204 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.465374 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mx4wh" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.481482 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.492462 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hkbf\" (UniqueName: \"kubernetes.io/projected/e3ac9095-4890-4130-b7f2-00c7927f6890-kube-api-access-4hkbf\") pod \"heat-operator-controller-manager-5f64f6f8bb-6k7ff\" (UID: \"e3ac9095-4890-4130-b7f2-00c7927f6890\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.492971 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nd5\" (UniqueName: \"kubernetes.io/projected/706fc0bc-1168-477f-b4ce-b30ea5b70bcf-kube-api-access-b9nd5\") pod \"designate-operator-controller-manager-697fb699cf-ljx9f\" (UID: \"706fc0bc-1168-477f-b4ce-b30ea5b70bcf\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.493041 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx7p4\" (UniqueName: \"kubernetes.io/projected/5b04f571-f2cc-4486-91c9-d6f9f710f7fd-kube-api-access-gx7p4\") pod \"glance-operator-controller-manager-5697bb5779-zx7w6\" (UID: \"5b04f571-f2cc-4486-91c9-d6f9f710f7fd\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.500531 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.505535 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.522021 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.548983 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qnfqk" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.551835 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7p4\" (UniqueName: \"kubernetes.io/projected/5b04f571-f2cc-4486-91c9-d6f9f710f7fd-kube-api-access-gx7p4\") pod \"glance-operator-controller-manager-5697bb5779-zx7w6\" (UID: \"5b04f571-f2cc-4486-91c9-d6f9f710f7fd\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.560638 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hkbf\" (UniqueName: \"kubernetes.io/projected/e3ac9095-4890-4130-b7f2-00c7927f6890-kube-api-access-4hkbf\") pod \"heat-operator-controller-manager-5f64f6f8bb-6k7ff\" (UID: \"e3ac9095-4890-4130-b7f2-00c7927f6890\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.568032 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.583766 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.584402 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.584414 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nd5\" (UniqueName: \"kubernetes.io/projected/706fc0bc-1168-477f-b4ce-b30ea5b70bcf-kube-api-access-b9nd5\") pod \"designate-operator-controller-manager-697fb699cf-ljx9f\" (UID: \"706fc0bc-1168-477f-b4ce-b30ea5b70bcf\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.595153 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c78w\" (UniqueName: \"kubernetes.io/projected/6c377cc4-9030-4cc7-96b9-68d9634e24da-kube-api-access-2c78w\") pod \"ironic-operator-controller-manager-967d97867-r2rm9\" (UID: \"6c377cc4-9030-4cc7-96b9-68d9634e24da\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.595215 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.595260 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84j9l\" (UniqueName: \"kubernetes.io/projected/5dcc1318-bc40-49c5-b6e1-718c06af70f3-kube-api-access-84j9l\") pod \"keystone-operator-controller-manager-7765d96ddf-9ff9c\" (UID: \"5dcc1318-bc40-49c5-b6e1-718c06af70f3\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.595294 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffsdj\" (UniqueName: \"kubernetes.io/projected/60c59cb7-e525-45d2-a544-4b9a2dc6bbab-kube-api-access-ffsdj\") pod \"horizon-operator-controller-manager-68c6d99b8f-52xhp\" (UID: \"60c59cb7-e525-45d2-a544-4b9a2dc6bbab\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.595371 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-267zn\" (UniqueName: \"kubernetes.io/projected/01836e5a-2708-4b73-b24d-79f804c8e0ef-kube-api-access-267zn\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.597782 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.602925 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.651340 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.657289 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.683712 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.683824 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.689200 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.697447 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf6v2\" (UniqueName: \"kubernetes.io/projected/b3f7f32e-9cb9-42e8-8aca-86bb7b16479d-kube-api-access-tf6v2\") pod \"manila-operator-controller-manager-5b5fd79c9c-fjnk7\" (UID: \"b3f7f32e-9cb9-42e8-8aca-86bb7b16479d\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.697570 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c78w\" (UniqueName: \"kubernetes.io/projected/6c377cc4-9030-4cc7-96b9-68d9634e24da-kube-api-access-2c78w\") pod \"ironic-operator-controller-manager-967d97867-r2rm9\" (UID: \"6c377cc4-9030-4cc7-96b9-68d9634e24da\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.697601 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.697629 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84j9l\" (UniqueName: \"kubernetes.io/projected/5dcc1318-bc40-49c5-b6e1-718c06af70f3-kube-api-access-84j9l\") pod \"keystone-operator-controller-manager-7765d96ddf-9ff9c\" (UID: \"5dcc1318-bc40-49c5-b6e1-718c06af70f3\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.697779 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffsdj\" (UniqueName: \"kubernetes.io/projected/60c59cb7-e525-45d2-a544-4b9a2dc6bbab-kube-api-access-ffsdj\") pod \"horizon-operator-controller-manager-68c6d99b8f-52xhp\" (UID: \"60c59cb7-e525-45d2-a544-4b9a2dc6bbab\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.697849 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-267zn\" (UniqueName: \"kubernetes.io/projected/01836e5a-2708-4b73-b24d-79f804c8e0ef-kube-api-access-267zn\") pod 
\"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:45 crc kubenswrapper[4770]: E1209 14:41:45.699582 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:45 crc kubenswrapper[4770]: E1209 14:41:45.699639 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert podName:01836e5a-2708-4b73-b24d-79f804c8e0ef nodeName:}" failed. No retries permitted until 2025-12-09 14:41:46.199621876 +0000 UTC m=+1138.095824002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert") pod "infra-operator-controller-manager-78d48bff9d-55h7r" (UID: "01836e5a-2708-4b73-b24d-79f804c8e0ef") : secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.700933 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rlmpk" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.737450 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffsdj\" (UniqueName: \"kubernetes.io/projected/60c59cb7-e525-45d2-a544-4b9a2dc6bbab-kube-api-access-ffsdj\") pod \"horizon-operator-controller-manager-68c6d99b8f-52xhp\" (UID: \"60c59cb7-e525-45d2-a544-4b9a2dc6bbab\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.737508 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.737587 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84j9l\" (UniqueName: \"kubernetes.io/projected/5dcc1318-bc40-49c5-b6e1-718c06af70f3-kube-api-access-84j9l\") pod \"keystone-operator-controller-manager-7765d96ddf-9ff9c\" (UID: \"5dcc1318-bc40-49c5-b6e1-718c06af70f3\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.745468 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-267zn\" (UniqueName: \"kubernetes.io/projected/01836e5a-2708-4b73-b24d-79f804c8e0ef-kube-api-access-267zn\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.753177 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c78w\" (UniqueName: \"kubernetes.io/projected/6c377cc4-9030-4cc7-96b9-68d9634e24da-kube-api-access-2c78w\") pod \"ironic-operator-controller-manager-967d97867-r2rm9\" (UID: \"6c377cc4-9030-4cc7-96b9-68d9634e24da\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.764744 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.765840 4770 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.772065 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-n28w5" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.790990 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.802974 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd4bf\" (UniqueName: \"kubernetes.io/projected/847e2267-4743-4e6d-b76f-81bb0402a8e2-kube-api-access-zd4bf\") pod \"mariadb-operator-controller-manager-79c8c4686c-6rp4l\" (UID: \"847e2267-4743-4e6d-b76f-81bb0402a8e2\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.803047 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf6v2\" (UniqueName: \"kubernetes.io/projected/b3f7f32e-9cb9-42e8-8aca-86bb7b16479d-kube-api-access-tf6v2\") pod \"manila-operator-controller-manager-5b5fd79c9c-fjnk7\" (UID: \"b3f7f32e-9cb9-42e8-8aca-86bb7b16479d\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.803155 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxv4h\" (UniqueName: \"kubernetes.io/projected/a697db6f-78dd-4f87-bafb-b1ad6ddfa241-kube-api-access-mxv4h\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-m4rlv\" (UID: \"a697db6f-78dd-4f87-bafb-b1ad6ddfa241\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.819767 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf6v2\" (UniqueName: \"kubernetes.io/projected/b3f7f32e-9cb9-42e8-8aca-86bb7b16479d-kube-api-access-tf6v2\") pod \"manila-operator-controller-manager-5b5fd79c9c-fjnk7\" (UID: \"b3f7f32e-9cb9-42e8-8aca-86bb7b16479d\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.825437 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.833776 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.835074 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.835991 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.837273 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2bccv" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.840556 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.848022 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.851591 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.869200 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nsbsf" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.882200 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.904852 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m782g\" (UniqueName: \"kubernetes.io/projected/1e594101-5c21-4a4a-8027-39449d107481-kube-api-access-m782g\") pod \"nova-operator-controller-manager-697bc559fc-lvzw2\" (UID: \"1e594101-5c21-4a4a-8027-39449d107481\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.904908 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbtlb\" (UniqueName: \"kubernetes.io/projected/369cb688-d8db-443a-beef-6f0cf31b31cf-kube-api-access-dbtlb\") pod \"octavia-operator-controller-manager-998648c74-zvfn5\" (UID: \"369cb688-d8db-443a-beef-6f0cf31b31cf\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.904945 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd4bf\" (UniqueName: \"kubernetes.io/projected/847e2267-4743-4e6d-b76f-81bb0402a8e2-kube-api-access-zd4bf\") pod \"mariadb-operator-controller-manager-79c8c4686c-6rp4l\" (UID: \"847e2267-4743-4e6d-b76f-81bb0402a8e2\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.905006 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxv4h\" (UniqueName: \"kubernetes.io/projected/a697db6f-78dd-4f87-bafb-b1ad6ddfa241-kube-api-access-mxv4h\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-m4rlv\" (UID: \"a697db6f-78dd-4f87-bafb-b1ad6ddfa241\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.924326 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.925775 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.929024 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.939369 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.939576 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-z5rfg" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.947214 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxv4h\" (UniqueName: \"kubernetes.io/projected/a697db6f-78dd-4f87-bafb-b1ad6ddfa241-kube-api-access-mxv4h\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-m4rlv\" (UID: \"a697db6f-78dd-4f87-bafb-b1ad6ddfa241\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.954960 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd4bf\" (UniqueName: \"kubernetes.io/projected/847e2267-4743-4e6d-b76f-81bb0402a8e2-kube-api-access-zd4bf\") pod \"mariadb-operator-controller-manager-79c8c4686c-6rp4l\" (UID: \"847e2267-4743-4e6d-b76f-81bb0402a8e2\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.955600 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.956214 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.981822 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.984450 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.987705 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-gwvj5" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.989008 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-brn9f"] Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.994066 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" Dec 09 14:41:45 crc kubenswrapper[4770]: I1209 14:41:45.999583 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.006315 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m782g\" (UniqueName: \"kubernetes.io/projected/1e594101-5c21-4a4a-8027-39449d107481-kube-api-access-m782g\") pod \"nova-operator-controller-manager-697bc559fc-lvzw2\" (UID: \"1e594101-5c21-4a4a-8027-39449d107481\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.006356 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbtlb\" (UniqueName: \"kubernetes.io/projected/369cb688-d8db-443a-beef-6f0cf31b31cf-kube-api-access-dbtlb\") pod \"octavia-operator-controller-manager-998648c74-zvfn5\" (UID: \"369cb688-d8db-443a-beef-6f0cf31b31cf\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.007074 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-brn9f"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.007209 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-hqmxx" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.021456 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.022981 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.029063 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bfjqn" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.043054 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.046824 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbtlb\" (UniqueName: \"kubernetes.io/projected/369cb688-d8db-443a-beef-6f0cf31b31cf-kube-api-access-dbtlb\") pod \"octavia-operator-controller-manager-998648c74-zvfn5\" (UID: \"369cb688-d8db-443a-beef-6f0cf31b31cf\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.048965 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.058003 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-j8xrf" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.058833 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m782g\" (UniqueName: \"kubernetes.io/projected/1e594101-5c21-4a4a-8027-39449d107481-kube-api-access-m782g\") pod \"nova-operator-controller-manager-697bc559fc-lvzw2\" (UID: \"1e594101-5c21-4a4a-8027-39449d107481\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.069831 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.072278 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.104047 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.109611 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsg4t\" (UniqueName: \"kubernetes.io/projected/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-kube-api-access-jsg4t\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.109661 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8tv5\" (UniqueName: \"kubernetes.io/projected/703c76ff-d327-45f7-a9ae-2d60d7629d31-kube-api-access-d8tv5\") pod \"placement-operator-controller-manager-78f8948974-brn9f\" (UID: \"703c76ff-d327-45f7-a9ae-2d60d7629d31\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.109709 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrrjs\" (UniqueName: \"kubernetes.io/projected/2df32a14-ab02-48cf-94e4-5dd7b72fdcff-kube-api-access-rrrjs\") pod \"ovn-operator-controller-manager-b6456fdb6-j48s5\" (UID: \"2df32a14-ab02-48cf-94e4-5dd7b72fdcff\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.109760 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.109788 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjq4\" (UniqueName: 
\"kubernetes.io/projected/37bafc12-f467-417c-b3f7-6fc18896b73f-kube-api-access-gkjq4\") pod \"telemetry-operator-controller-manager-5f89dd7bc5-2vmmm\" (UID: \"37bafc12-f467-417c-b3f7-6fc18896b73f\") " pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.109817 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22l8z\" (UniqueName: \"kubernetes.io/projected/02f392e6-53d4-4bdb-bb7c-3ff1e29266bd-kube-api-access-22l8z\") pod \"swift-operator-controller-manager-9d58d64bc-6xd5q\" (UID: \"02f392e6-53d4-4bdb-bb7c-3ff1e29266bd\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.180095 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.198952 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.211040 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22l8z\" (UniqueName: \"kubernetes.io/projected/02f392e6-53d4-4bdb-bb7c-3ff1e29266bd-kube-api-access-22l8z\") pod \"swift-operator-controller-manager-9d58d64bc-6xd5q\" (UID: \"02f392e6-53d4-4bdb-bb7c-3ff1e29266bd\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.211145 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsg4t\" (UniqueName: \"kubernetes.io/projected/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-kube-api-access-jsg4t\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.211201 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8tv5\" (UniqueName: \"kubernetes.io/projected/703c76ff-d327-45f7-a9ae-2d60d7629d31-kube-api-access-d8tv5\") pod \"placement-operator-controller-manager-78f8948974-brn9f\" (UID: \"703c76ff-d327-45f7-a9ae-2d60d7629d31\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.211239 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.211265 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrrjs\" (UniqueName: \"kubernetes.io/projected/2df32a14-ab02-48cf-94e4-5dd7b72fdcff-kube-api-access-rrrjs\") pod \"ovn-operator-controller-manager-b6456fdb6-j48s5\" (UID: \"2df32a14-ab02-48cf-94e4-5dd7b72fdcff\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.211359 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.211398 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjq4\" (UniqueName: \"kubernetes.io/projected/37bafc12-f467-417c-b3f7-6fc18896b73f-kube-api-access-gkjq4\") pod \"telemetry-operator-controller-manager-5f89dd7bc5-2vmmm\" (UID: \"37bafc12-f467-417c-b3f7-6fc18896b73f\") " pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" Dec 09 14:41:46 crc kubenswrapper[4770]: E1209 14:41:46.212452 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:46 crc kubenswrapper[4770]: E1209 14:41:46.212525 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert podName:01836e5a-2708-4b73-b24d-79f804c8e0ef nodeName:}" failed. No retries permitted until 2025-12-09 14:41:47.212501365 +0000 UTC m=+1139.108703501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert") pod "infra-operator-controller-manager-78d48bff9d-55h7r" (UID: "01836e5a-2708-4b73-b24d-79f804c8e0ef") : secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:46 crc kubenswrapper[4770]: E1209 14:41:46.213110 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:46 crc kubenswrapper[4770]: E1209 14:41:46.213196 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert podName:5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:46.713169733 +0000 UTC m=+1138.609371869 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5bk9l" (UID: "5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.249478 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8tv5\" (UniqueName: \"kubernetes.io/projected/703c76ff-d327-45f7-a9ae-2d60d7629d31-kube-api-access-d8tv5\") pod \"placement-operator-controller-manager-78f8948974-brn9f\" (UID: \"703c76ff-d327-45f7-a9ae-2d60d7629d31\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.319804 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.330592 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22l8z\" (UniqueName: \"kubernetes.io/projected/02f392e6-53d4-4bdb-bb7c-3ff1e29266bd-kube-api-access-22l8z\") pod \"swift-operator-controller-manager-9d58d64bc-6xd5q\" (UID: \"02f392e6-53d4-4bdb-bb7c-3ff1e29266bd\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.331381 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsg4t\" (UniqueName: \"kubernetes.io/projected/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-kube-api-access-jsg4t\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.331622 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrrjs\" (UniqueName: \"kubernetes.io/projected/2df32a14-ab02-48cf-94e4-5dd7b72fdcff-kube-api-access-rrrjs\") pod \"ovn-operator-controller-manager-b6456fdb6-j48s5\" (UID: \"2df32a14-ab02-48cf-94e4-5dd7b72fdcff\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.332549 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.332701 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjq4\" (UniqueName: \"kubernetes.io/projected/37bafc12-f467-417c-b3f7-6fc18896b73f-kube-api-access-gkjq4\") pod \"telemetry-operator-controller-manager-5f89dd7bc5-2vmmm\" (UID: \"37bafc12-f467-417c-b3f7-6fc18896b73f\") " pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.343413 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.355382 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.365788 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.377969 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.495433 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-pglrm" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.738975 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.739924 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.749710 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.753173 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.755865 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:46 crc kubenswrapper[4770]: E1209 14:41:46.755981 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:46 crc kubenswrapper[4770]: E1209 14:41:46.766465 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert podName:5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:47.766437017 +0000 UTC m=+1139.662639163 (durationBeforeRetry 1s). 
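The nestedpendingoperations.go records around this point show kubelet's per-operation retry throttling: each failed MountVolume.SetUp for a given volume schedules the next attempt only after a doubling delay, visible here as durationBeforeRetry 500ms, then 1s, then 2s for the same cert volume. A minimal sketch of that capped exponential backoff; the doubling and the 500ms starting point are taken from the log above, while the cap value is an assumption for illustration and this is not kubelet's actual implementation:

    // Capped exponential backoff, mirroring the doubling durationBeforeRetry
    // values logged by nestedpendingoperations.go (500ms -> 1s -> 2s -> ...).
    // Illustrative sketch only.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // first retry delay seen in the log
        maxDelay := 2 * time.Minute     // assumed cap for the sketch
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Once an operation finally succeeds its backoff entry is discarded, which is why these mount errors normally stop as soon as the missing secret is created.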
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5bk9l" (UID: "5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.781215 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hfg9s" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.863707 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.907784 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgwf\" (UniqueName: \"kubernetes.io/projected/d65c2397-adaa-461c-9a86-05901e7b3726-kube-api-access-lpgwf\") pod \"test-operator-controller-manager-5854674fcc-rfq2b\" (UID: \"d65c2397-adaa-461c-9a86-05901e7b3726\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.907851 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bflw\" (UniqueName: \"kubernetes.io/projected/7ff426e3-2995-401a-9587-b7277f96e1b3-kube-api-access-9bflw\") pod \"watcher-operator-controller-manager-667bd8d554-wdqn5\" (UID: \"7ff426e3-2995-401a-9587-b7277f96e1b3\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.929782 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.931098 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.940146 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.942321 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp"] Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.943663 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" Dec 09 14:41:46 crc kubenswrapper[4770]: I1209 14:41:46.946955 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp"] Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.009444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlqx7\" (UniqueName: \"kubernetes.io/projected/8d7e5182-a5a9-4daf-b268-f967a207932c-kube-api-access-mlqx7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c5xpp\" (UID: \"8d7e5182-a5a9-4daf-b268-f967a207932c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.009523 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr7jc\" (UniqueName: \"kubernetes.io/projected/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-kube-api-access-tr7jc\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.009695 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.009770 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpgwf\" (UniqueName: \"kubernetes.io/projected/d65c2397-adaa-461c-9a86-05901e7b3726-kube-api-access-lpgwf\") pod \"test-operator-controller-manager-5854674fcc-rfq2b\" (UID: \"d65c2397-adaa-461c-9a86-05901e7b3726\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.009816 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.009869 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bflw\" (UniqueName: \"kubernetes.io/projected/7ff426e3-2995-401a-9587-b7277f96e1b3-kube-api-access-9bflw\") pod \"watcher-operator-controller-manager-667bd8d554-wdqn5\" (UID: \"7ff426e3-2995-401a-9587-b7277f96e1b3\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.112855 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlqx7\" (UniqueName: \"kubernetes.io/projected/8d7e5182-a5a9-4daf-b268-f967a207932c-kube-api-access-mlqx7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c5xpp\" (UID: \"8d7e5182-a5a9-4daf-b268-f967a207932c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" 
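The reconciler_common.go records above show kubelet's volume manager walking each pod volume through its states: operationExecutor.VerifyControllerAttachedVolume (reconciler_common.go:245) confirms the volume is attached and records it in the actual state of the world, operationExecutor.MountVolume (reconciler_common.go:218) starts the mount, and operation_generator.go:637 reports MountVolume.SetUp succeeded. A minimal sketch of that desired-state versus actual-state reconcile loop; all type and function names here are illustrative, not kubelet's real volumemanager API:

    // Sketch of the desired-state vs. actual-state reconcile pattern that the
    // reconciler_common.go messages reflect. Types and names are illustrative.
    package main

    import "fmt"

    type volume struct {
        uniqueName string // e.g. "kubernetes.io/projected/<uid>-kube-api-access-mlqx7"
        pod        string
    }

    // mount simulates MountVolume.SetUp; the real code materializes the
    // projected token or secret content into the pod's volume directory.
    func mount(v volume) error {
        fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.uniqueName, v.pod)
        return nil
    }

    // reconcile mounts every desired volume not yet present in the actual
    // state of the world, the loop the log lines above are tracing.
    func reconcile(desired []volume, actual map[string]bool) {
        for _, v := range desired {
            if actual[v.uniqueName] {
                continue // already mounted; nothing to do
            }
            fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v.uniqueName)
            if err := mount(v); err == nil {
                actual[v.uniqueName] = true
            }
        }
    }

    func main() {
        desired := []volume{{uniqueName: "kubernetes.io/projected/example-kube-api-access", pod: "example-pod"}}
        reconcile(desired, map[string]bool{})
    }

The real reconciler also runs the reverse direction, unmounting volumes that are no longer desired, which this capture does not exercise.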
Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.113412 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr7jc\" (UniqueName: \"kubernetes.io/projected/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-kube-api-access-tr7jc\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.113612 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.113426 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nn8qs" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.113766 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.113678 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.114058 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.114118 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:47.614099662 +0000 UTC m=+1139.510301818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "metrics-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.114408 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.114560 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-65scj" Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.119074 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.119189 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:47.619154959 +0000 UTC m=+1139.515357135 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "webhook-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.170056 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlqx7\" (UniqueName: \"kubernetes.io/projected/8d7e5182-a5a9-4daf-b268-f967a207932c-kube-api-access-mlqx7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c5xpp\" (UID: \"8d7e5182-a5a9-4daf-b268-f967a207932c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.174932 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr7jc\" (UniqueName: \"kubernetes.io/projected/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-kube-api-access-tr7jc\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.179451 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpgwf\" (UniqueName: \"kubernetes.io/projected/d65c2397-adaa-461c-9a86-05901e7b3726-kube-api-access-lpgwf\") pod \"test-operator-controller-manager-5854674fcc-rfq2b\" (UID: \"d65c2397-adaa-461c-9a86-05901e7b3726\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.190574 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bflw\" (UniqueName: \"kubernetes.io/projected/7ff426e3-2995-401a-9587-b7277f96e1b3-kube-api-access-9bflw\") pod \"watcher-operator-controller-manager-667bd8d554-wdqn5\" (UID: \"7ff426e3-2995-401a-9587-b7277f96e1b3\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.221577 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.221820 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.221872 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert podName:01836e5a-2708-4b73-b24d-79f804c8e0ef nodeName:}" failed. No retries permitted until 2025-12-09 14:41:49.22185832 +0000 UTC m=+1141.118060456 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert") pod "infra-operator-controller-manager-78d48bff9d-55h7r" (UID: "01836e5a-2708-4b73-b24d-79f804c8e0ef") : secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.229441 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.242871 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6"] Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.416014 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.435644 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.635984 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.636143 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.636299 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.636392 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:48.636373606 +0000 UTC m=+1140.532575742 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "metrics-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.636424 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.636495 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:48.636474038 +0000 UTC m=+1140.532676174 (durationBeforeRetry 1s). 
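Every MountVolume.SetUp failure in this stretch has the same root cause reported by secret.go:188: the Secret backing a certificate volume (infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert, metrics-server-cert, webhook-server-cert) does not yet exist in the openstack-operators namespace. Such webhook and metrics certificates are usually created asynchronously by a certificate controller after the operators are installed, so the affected pods sit in ContainerCreating until the Secrets appear. A diagnostic sketch using standard client-go calls to check for them; the secret names come from the log, everything else is illustrative:

    // Diagnostic sketch: check whether the Secrets referenced by the failing
    // "cert" volumes exist yet. Secret names are taken from the log above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the local kubeconfig (~/.kube/config); in-cluster config would
        // work equally well for a pod-based check.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "openstack-operators"
        for _, name := range []string{
            "infra-operator-webhook-server-cert",
            "openstack-baremetal-operator-webhook-server-cert",
            "metrics-server-cert",
            "webhook-server-cert",
        } {
            if _, err := cs.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{}); err != nil {
                fmt.Printf("secret %q: %v\n", name, err) // mirrors the kubelet "not found" errors
                continue
            }
            fmt.Printf("secret %q: present\n", name)
        }
    }

If the Secrets never appear, the pods keep reposting these FailedMount errors at the capped retry interval and never leave ContainerCreating.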
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "webhook-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: I1209 14:41:47.850990 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.851416 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:47 crc kubenswrapper[4770]: E1209 14:41:47.851466 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert podName:5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:49.851451991 +0000 UTC m=+1141.747654127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5bk9l" (UID: "5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.157886 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.203925 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.213545 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.259175 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.278582 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" event={"ID":"b3f7f32e-9cb9-42e8-8aca-86bb7b16479d","Type":"ContainerStarted","Data":"7bc1eee99ded199da68dc0c03837022516b3f7a102e9fde8defa760b2356d628"} Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.286555 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" event={"ID":"fadaf4e5-6b66-4dc3-b51e-e4700db03792","Type":"ContainerStarted","Data":"0572dc091972fd0bc6dea6d63b63e1a735d94a503b3a460d2832a5a12c3ffaef"} Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.296105 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" event={"ID":"5b04f571-f2cc-4486-91c9-d6f9f710f7fd","Type":"ContainerStarted","Data":"dc624c05df0432d2e35bbbdf8479fb1a4d426005a882e7c3057e72c1e4be3d48"} Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.305323 
4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" event={"ID":"218b9184-d581-40d1-bc52-734507d47b65","Type":"ContainerStarted","Data":"0b658847ac3f931e1e624734d4f49683556537b70614daae83b20e234596a80f"} Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.307464 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" event={"ID":"e3ac9095-4890-4130-b7f2-00c7927f6890","Type":"ContainerStarted","Data":"71eb90f5eabfb2e69eef58d379a2868add9520d72a63edccf54bb01af9e01302"} Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.395651 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f"] Dec 09 14:41:48 crc kubenswrapper[4770]: W1209 14:41:48.404860 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c377cc4_9030_4cc7_96b9_68d9634e24da.slice/crio-ee1b1912f4cf7dbbcaaf180c07685d9c0907ed7a57270da3c31b5865f8046a7e WatchSource:0}: Error finding container ee1b1912f4cf7dbbcaaf180c07685d9c0907ed7a57270da3c31b5865f8046a7e: Status 404 returned error can't find the container with id ee1b1912f4cf7dbbcaaf180c07685d9c0907ed7a57270da3c31b5865f8046a7e Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.420969 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv"] Dec 09 14:41:48 crc kubenswrapper[4770]: W1209 14:41:48.427839 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda697db6f_78dd_4f87_bafb_b1ad6ddfa241.slice/crio-4ff872b51a17dd7dc830a2d7d7c7281da2e73dff47c5f62a829467aacb68c8f4 WatchSource:0}: Error finding container 4ff872b51a17dd7dc830a2d7d7c7281da2e73dff47c5f62a829467aacb68c8f4: Status 404 returned error can't find the container with id 4ff872b51a17dd7dc830a2d7d7c7281da2e73dff47c5f62a829467aacb68c8f4 Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.434856 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.442054 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.451758 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.642376 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.656417 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.685101 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:48 crc 
kubenswrapper[4770]: I1209 14:41:48.685169 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.685385 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.685436 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:50.685418046 +0000 UTC m=+1142.581620182 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "webhook-server-cert" not found Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.685710 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.685762 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:50.685750315 +0000 UTC m=+1142.581952451 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "metrics-server-cert" not found Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.809426 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-brn9f"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.811623 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.817704 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c"] Dec 09 14:41:48 crc kubenswrapper[4770]: W1209 14:41:48.832694 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcc1318_bc40_49c5_b6e1_718c06af70f3.slice/crio-406f10ba3f5fbd757f9149f0fdf3a8f010c3935bc958620bb0750ab87ddbd823 WatchSource:0}: Error finding container 406f10ba3f5fbd757f9149f0fdf3a8f010c3935bc958620bb0750ab87ddbd823: Status 404 returned error can't find the container with id 406f10ba3f5fbd757f9149f0fdf3a8f010c3935bc958620bb0750ab87ddbd823 Dec 09 14:41:48 crc kubenswrapper[4770]: W1209 14:41:48.833756 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod847e2267_4743_4e6d_b76f_81bb0402a8e2.slice/crio-b42868683201f7eb8f63032604fb36a8412f81438400b4be7c3af2a9d06c03da WatchSource:0}: Error finding container b42868683201f7eb8f63032604fb36a8412f81438400b4be7c3af2a9d06c03da: Status 404 returned error can't find the container with id b42868683201f7eb8f63032604fb36a8412f81438400b4be7c3af2a9d06c03da Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.843952 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ffsdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-52xhp_openstack-operators(60c59cb7-e525-45d2-a544-4b9a2dc6bbab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: W1209 14:41:48.844203 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2df32a14_ab02_48cf_94e4_5dd7b72fdcff.slice/crio-100473169d588492f58f1388d722a0b435f7f89337cec9daf266d81c9fe2b019 WatchSource:0}: Error finding container 100473169d588492f58f1388d722a0b435f7f89337cec9daf266d81c9fe2b019: Status 404 returned error can't find the container with id 100473169d588492f58f1388d722a0b435f7f89337cec9daf266d81c9fe2b019 Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.852832 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ffsdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-52xhp_openstack-operators(60c59cb7-e525-45d2-a544-4b9a2dc6bbab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.853986 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpgwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-rfq2b_openstack-operators(d65c2397-adaa-461c-9a86-05901e7b3726): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.854481 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" podUID="60c59cb7-e525-45d2-a544-4b9a2dc6bbab" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.854645 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrrjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-j48s5_openstack-operators(2df32a14-ab02-48cf-94e4-5dd7b72fdcff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.855215 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5"] Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.855814 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpgwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-rfq2b_openstack-operators(d65c2397-adaa-461c-9a86-05901e7b3726): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.856438 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrrjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-j48s5_openstack-operators(2df32a14-ab02-48cf-94e4-5dd7b72fdcff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.857327 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" podUID="d65c2397-adaa-461c-9a86-05901e7b3726" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.857700 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" podUID="2df32a14-ab02-48cf-94e4-5dd7b72fdcff" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.859337 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bflw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-wdqn5_openstack-operators(7ff426e3-2995-401a-9587-b7277f96e1b3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.860390 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp"] Dec 09 14:41:48 crc kubenswrapper[4770]: W1209 14:41:48.866369 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7e5182_a5a9_4daf_b268_f967a207932c.slice/crio-627e3b4a1297d2319bfe08afb130de85f4bd2c11eed922891f537e1e64690d52 WatchSource:0}: Error finding container 627e3b4a1297d2319bfe08afb130de85f4bd2c11eed922891f537e1e64690d52: Status 404 returned error can't find the container with id 627e3b4a1297d2319bfe08afb130de85f4bd2c11eed922891f537e1e64690d52 Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.866363 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp"] Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.866615 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bflw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-wdqn5_openstack-operators(7ff426e3-2995-401a-9587-b7277f96e1b3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.866630 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84j9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-9ff9c_openstack-operators(5dcc1318-bc40-49c5-b6e1-718c06af70f3): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.867702 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" podUID="7ff426e3-2995-401a-9587-b7277f96e1b3" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.868907 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mlqx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-c5xpp_openstack-operators(8d7e5182-a5a9-4daf-b268-f967a207932c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.868994 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84j9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-9ff9c_openstack-operators(5dcc1318-bc40-49c5-b6e1-718c06af70f3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.870212 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" podUID="5dcc1318-bc40-49c5-b6e1-718c06af70f3" Dec 09 14:41:48 crc kubenswrapper[4770]: E1209 14:41:48.870266 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" podUID="8d7e5182-a5a9-4daf-b268-f967a207932c" Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.870768 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b"] Dec 09 14:41:48 crc kubenswrapper[4770]: I1209 14:41:48.875495 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5"] Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.318162 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" event={"ID":"5dcc1318-bc40-49c5-b6e1-718c06af70f3","Type":"ContainerStarted","Data":"406f10ba3f5fbd757f9149f0fdf3a8f010c3935bc958620bb0750ab87ddbd823"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.319112 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.319351 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.319434 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert podName:01836e5a-2708-4b73-b24d-79f804c8e0ef nodeName:}" failed. 
No retries permitted until 2025-12-09 14:41:53.319407965 +0000 UTC m=+1145.215610131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert") pod "infra-operator-controller-manager-78d48bff9d-55h7r" (UID: "01836e5a-2708-4b73-b24d-79f804c8e0ef") : secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.320377 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" event={"ID":"a697db6f-78dd-4f87-bafb-b1ad6ddfa241","Type":"ContainerStarted","Data":"4ff872b51a17dd7dc830a2d7d7c7281da2e73dff47c5f62a829467aacb68c8f4"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.327046 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" event={"ID":"847e2267-4743-4e6d-b76f-81bb0402a8e2","Type":"ContainerStarted","Data":"b42868683201f7eb8f63032604fb36a8412f81438400b4be7c3af2a9d06c03da"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.329100 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" event={"ID":"1e594101-5c21-4a4a-8027-39449d107481","Type":"ContainerStarted","Data":"48d65c0b350ab4facf32f573f596a115615cd886e966f0f62b710263fe9844c8"} Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.329287 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" podUID="5dcc1318-bc40-49c5-b6e1-718c06af70f3" Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.336758 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" event={"ID":"2df32a14-ab02-48cf-94e4-5dd7b72fdcff","Type":"ContainerStarted","Data":"100473169d588492f58f1388d722a0b435f7f89337cec9daf266d81c9fe2b019"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.354584 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" event={"ID":"37bafc12-f467-417c-b3f7-6fc18896b73f","Type":"ContainerStarted","Data":"c7943837936b20f945655c0d536a3abb7638af9654b2b0a772f49ea3ca8194a5"} Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.359969 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" podUID="2df32a14-ab02-48cf-94e4-5dd7b72fdcff" Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.361302 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" event={"ID":"8d7e5182-a5a9-4daf-b268-f967a207932c","Type":"ContainerStarted","Data":"627e3b4a1297d2319bfe08afb130de85f4bd2c11eed922891f537e1e64690d52"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.363586 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" event={"ID":"7ff426e3-2995-401a-9587-b7277f96e1b3","Type":"ContainerStarted","Data":"2ee42f7cb0960efef50699209297cfc554750008c2385bd40f0d8c20e02cee91"} Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.364863 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" podUID="8d7e5182-a5a9-4daf-b268-f967a207932c" Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.366441 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" event={"ID":"60c59cb7-e525-45d2-a544-4b9a2dc6bbab","Type":"ContainerStarted","Data":"ef2490c91b604ea307b5dd938f0786a167afc675bb6b9abe3a0bf139f790261b"} Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.367781 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" podUID="60c59cb7-e525-45d2-a544-4b9a2dc6bbab" Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.367985 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" podUID="7ff426e3-2995-401a-9587-b7277f96e1b3" Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.368540 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" event={"ID":"703c76ff-d327-45f7-a9ae-2d60d7629d31","Type":"ContainerStarted","Data":"4edc5af7270a70eb4239ddbec4032e68dbb3bfbda4e6f9f802cb691b4e92c17e"} Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.380869 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" podUID="d65c2397-adaa-461c-9a86-05901e7b3726" Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.393303 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" event={"ID":"706fc0bc-1168-477f-b4ce-b30ea5b70bcf","Type":"ContainerStarted","Data":"b74ca872286cd184eae4689e3d05977b684b4d68e43fe7627520be10b2f505c3"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.393398 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" event={"ID":"369cb688-d8db-443a-beef-6f0cf31b31cf","Type":"ContainerStarted","Data":"cd966d17ba3377431f5b07fa318f25017fa5774bd11fc442547302689886a9c8"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.393429 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" event={"ID":"d65c2397-adaa-461c-9a86-05901e7b3726","Type":"ContainerStarted","Data":"6bb99d4fe7c04fe1cc6f90c76d1ae4e980275980a7e5433f8aecc73a4cde10de"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.393446 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" event={"ID":"6c377cc4-9030-4cc7-96b9-68d9634e24da","Type":"ContainerStarted","Data":"ee1b1912f4cf7dbbcaaf180c07685d9c0907ed7a57270da3c31b5865f8046a7e"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.393462 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" event={"ID":"02f392e6-53d4-4bdb-bb7c-3ff1e29266bd","Type":"ContainerStarted","Data":"58e2becf1af0eac5a455a078bff53f1d310b32e19fe2fd2f34b12f08aa57e038"} Dec 09 14:41:49 crc kubenswrapper[4770]: I1209 14:41:49.928396 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.928571 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:49 crc kubenswrapper[4770]: E1209 14:41:49.928773 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert podName:5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:53.928753428 +0000 UTC m=+1145.824955584 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5bk9l" (UID: "5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.396752 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" podUID="8d7e5182-a5a9-4daf-b268-f967a207932c" Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.397438 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" podUID="60c59cb7-e525-45d2-a544-4b9a2dc6bbab" Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.397550 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" podUID="d65c2397-adaa-461c-9a86-05901e7b3726" Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.397864 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" podUID="5dcc1318-bc40-49c5-b6e1-718c06af70f3" Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.398041 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" podUID="7ff426e3-2995-401a-9587-b7277f96e1b3" Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.398273 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" podUID="2df32a14-ab02-48cf-94e4-5dd7b72fdcff" Dec 09 14:41:50 crc kubenswrapper[4770]: I1209 14:41:50.740624 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:50 crc kubenswrapper[4770]: I1209 14:41:50.740681 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.741262 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.741314 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:54.741299213 +0000 UTC m=+1146.637501349 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "webhook-server-cert" not found Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.741328 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 14:41:50 crc kubenswrapper[4770]: E1209 14:41:50.741418 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:41:54.741396246 +0000 UTC m=+1146.637598462 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "metrics-server-cert" not found Dec 09 14:41:51 crc kubenswrapper[4770]: E1209 14:41:51.403145 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" podUID="d65c2397-adaa-461c-9a86-05901e7b3726" Dec 09 14:41:53 crc kubenswrapper[4770]: I1209 14:41:53.384399 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:41:53 crc kubenswrapper[4770]: E1209 14:41:53.384573 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:53 crc kubenswrapper[4770]: E1209 14:41:53.384962 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert podName:01836e5a-2708-4b73-b24d-79f804c8e0ef nodeName:}" failed. No retries permitted until 2025-12-09 14:42:01.384942387 +0000 UTC m=+1153.281144523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert") pod "infra-operator-controller-manager-78d48bff9d-55h7r" (UID: "01836e5a-2708-4b73-b24d-79f804c8e0ef") : secret "infra-operator-webhook-server-cert" not found Dec 09 14:41:53 crc kubenswrapper[4770]: I1209 14:41:53.996578 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:41:53 crc kubenswrapper[4770]: E1209 14:41:53.996823 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:53 crc kubenswrapper[4770]: E1209 14:41:53.996916 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert podName:5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975 nodeName:}" failed. No retries permitted until 2025-12-09 14:42:01.99689448 +0000 UTC m=+1153.893096616 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5bk9l" (UID: "5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:41:54 crc kubenswrapper[4770]: I1209 14:41:54.810628 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:54 crc kubenswrapper[4770]: I1209 14:41:54.811080 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:41:54 crc kubenswrapper[4770]: E1209 14:41:54.810850 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 14:41:54 crc kubenswrapper[4770]: E1209 14:41:54.811252 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:42:02.811227503 +0000 UTC m=+1154.707429709 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "metrics-server-cert" not found Dec 09 14:41:54 crc kubenswrapper[4770]: E1209 14:41:54.811277 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 14:41:54 crc kubenswrapper[4770]: E1209 14:41:54.811335 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:42:02.811318817 +0000 UTC m=+1154.707521023 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "webhook-server-cert" not found Dec 09 14:42:01 crc kubenswrapper[4770]: I1209 14:42:01.418389 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:42:01 crc kubenswrapper[4770]: E1209 14:42:01.418598 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 14:42:01 crc kubenswrapper[4770]: E1209 14:42:01.418969 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert podName:01836e5a-2708-4b73-b24d-79f804c8e0ef nodeName:}" failed. No retries permitted until 2025-12-09 14:42:17.418951654 +0000 UTC m=+1169.315153790 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert") pod "infra-operator-controller-manager-78d48bff9d-55h7r" (UID: "01836e5a-2708-4b73-b24d-79f804c8e0ef") : secret "infra-operator-webhook-server-cert" not found Dec 09 14:42:02 crc kubenswrapper[4770]: I1209 14:42:02.028295 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:42:02 crc kubenswrapper[4770]: E1209 14:42:02.028535 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:42:02 crc kubenswrapper[4770]: E1209 14:42:02.028585 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert podName:5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975 nodeName:}" failed. No retries permitted until 2025-12-09 14:42:18.028570334 +0000 UTC m=+1169.924772470 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5bk9l" (UID: "5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 14:42:02 crc kubenswrapper[4770]: I1209 14:42:02.842996 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:42:02 crc kubenswrapper[4770]: I1209 14:42:02.843057 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:42:02 crc kubenswrapper[4770]: E1209 14:42:02.843200 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 14:42:02 crc kubenswrapper[4770]: E1209 14:42:02.843241 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 14:42:02 crc kubenswrapper[4770]: E1209 14:42:02.843251 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:42:18.843234047 +0000 UTC m=+1170.739436183 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "webhook-server-cert" not found Dec 09 14:42:02 crc kubenswrapper[4770]: E1209 14:42:02.843374 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs podName:8fa6c3c5-bb85-4c66-b304-fa19ecb453e4 nodeName:}" failed. No retries permitted until 2025-12-09 14:42:18.84335125 +0000 UTC m=+1170.739553396 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs") pod "openstack-operator-controller-manager-7c59bdd89-4cf5f" (UID: "8fa6c3c5-bb85-4c66-b304-fa19ecb453e4") : secret "metrics-server-cert" not found Dec 09 14:42:07 crc kubenswrapper[4770]: E1209 14:42:07.087095 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 09 14:42:07 crc kubenswrapper[4770]: E1209 14:42:07.087910 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tf6v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-fjnk7_openstack-operators(b3f7f32e-9cb9-42e8-8aca-86bb7b16479d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:08 crc kubenswrapper[4770]: E1209 14:42:08.421886 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87" Dec 09 14:42:08 crc 
kubenswrapper[4770]: E1209 14:42:08.422077 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2c78w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-r2rm9_openstack-operators(6c377cc4-9030-4cc7-96b9-68d9634e24da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:09 crc kubenswrapper[4770]: E1209 14:42:09.635325 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 09 14:42:09 crc kubenswrapper[4770]: E1209 14:42:09.635831 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbtlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-zvfn5_openstack-operators(369cb688-d8db-443a-beef-6f0cf31b31cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:10 crc kubenswrapper[4770]: E1209 14:42:10.410954 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 09 14:42:10 crc kubenswrapper[4770]: E1209 14:42:10.411148 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d8tv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-brn9f_openstack-operators(703c76ff-d327-45f7-a9ae-2d60d7629d31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:11 crc kubenswrapper[4770]: E1209 14:42:11.981140 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 09 14:42:11 crc kubenswrapper[4770]: E1209 14:42:11.981666 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hkbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-6k7ff_openstack-operators(e3ac9095-4890-4130-b7f2-00c7927f6890): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:14 crc kubenswrapper[4770]: I1209 14:42:14.243492 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:42:14 crc kubenswrapper[4770]: I1209 14:42:14.243566 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:42:15 crc kubenswrapper[4770]: E1209 14:42:15.639319 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 09 14:42:15 crc kubenswrapper[4770]: E1209 14:42:15.639970 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22l8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-6xd5q_openstack-operators(02f392e6-53d4-4bdb-bb7c-3ff1e29266bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:16 crc kubenswrapper[4770]: E1209 14:42:16.581835 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad" Dec 09 14:42:16 crc kubenswrapper[4770]: E1209 14:42:16.582126 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zd4bf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-6rp4l_openstack-operators(847e2267-4743-4e6d-b76f-81bb0402a8e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:17 crc kubenswrapper[4770]: I1209 14:42:17.428255 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:42:17 crc kubenswrapper[4770]: I1209 14:42:17.436460 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01836e5a-2708-4b73-b24d-79f804c8e0ef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-55h7r\" (UID: \"01836e5a-2708-4b73-b24d-79f804c8e0ef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:42:17 crc kubenswrapper[4770]: I1209 14:42:17.626666 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8nkwn" Dec 09 14:42:17 crc kubenswrapper[4770]: I1209 14:42:17.635506 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" Dec 09 14:42:17 crc kubenswrapper[4770]: E1209 14:42:17.995791 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 09 14:42:17 crc kubenswrapper[4770]: E1209 14:42:17.996120 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mxv4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-m4rlv_openstack-operators(a697db6f-78dd-4f87-bafb-b1ad6ddfa241): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.070793 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.074808 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5bk9l\" (UID: \"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.361185 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-z5rfg" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.370039 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.886375 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.886431 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.894932 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-metrics-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.897157 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8fa6c3c5-bb85-4c66-b304-fa19ecb453e4-webhook-certs\") pod \"openstack-operator-controller-manager-7c59bdd89-4cf5f\" (UID: \"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4\") " pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.966357 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-65scj" Dec 09 14:42:18 crc kubenswrapper[4770]: I1209 14:42:18.975243 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:42:19 crc kubenswrapper[4770]: E1209 14:42:19.962086 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8" Dec 09 14:42:19 crc kubenswrapper[4770]: E1209 14:42:19.962544 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bflw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-wdqn5_openstack-operators(7ff426e3-2995-401a-9587-b7277f96e1b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:23 crc kubenswrapper[4770]: E1209 14:42:23.328653 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 09 14:42:23 crc kubenswrapper[4770]: E1209 14:42:23.329174 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m782g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-lvzw2_openstack-operators(1e594101-5c21-4a4a-8027-39449d107481): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:23 crc kubenswrapper[4770]: E1209 14:42:23.916283 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 09 14:42:23 crc kubenswrapper[4770]: E1209 14:42:23.916680 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrrjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-j48s5_openstack-operators(2df32a14-ab02-48cf-94e4-5dd7b72fdcff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:25 crc kubenswrapper[4770]: E1209 14:42:25.219431 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 09 14:42:25 crc kubenswrapper[4770]: E1209 14:42:25.219848 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpgwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-rfq2b_openstack-operators(d65c2397-adaa-461c-9a86-05901e7b3726): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:26 crc kubenswrapper[4770]: E1209 14:42:26.229094 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 09 14:42:26 crc kubenswrapper[4770]: E1209 14:42:26.229692 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84j9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-9ff9c_openstack-operators(5dcc1318-bc40-49c5-b6e1-718c06af70f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:26 crc kubenswrapper[4770]: E1209 14:42:26.784099 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 09 14:42:26 crc kubenswrapper[4770]: E1209 14:42:26.784252 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mlqx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-c5xpp_openstack-operators(8d7e5182-a5a9-4daf-b268-f967a207932c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 09 14:42:26 crc kubenswrapper[4770]: E1209 14:42:26.785450 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" podUID="8d7e5182-a5a9-4daf-b268-f967a207932c" Dec 09 14:42:27 crc kubenswrapper[4770]: I1209 14:42:27.313885 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l"] Dec 09 14:42:27 crc kubenswrapper[4770]: I1209 14:42:27.627207 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f"] Dec 09 14:42:27 crc kubenswrapper[4770]: I1209 14:42:27.681190 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r"] Dec 09 14:42:29 crc kubenswrapper[4770]: W1209 14:42:29.784896 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fa6c3c5_bb85_4c66_b304_fa19ecb453e4.slice/crio-0cee5df0716de6bfac99f58733f0b23424e8cc665c9efd266aa501987d1a5222 WatchSource:0}: Error finding container 0cee5df0716de6bfac99f58733f0b23424e8cc665c9efd266aa501987d1a5222: Status 404 returned error can't find the container with id 0cee5df0716de6bfac99f58733f0b23424e8cc665c9efd266aa501987d1a5222 Dec 09 14:42:29 crc kubenswrapper[4770]: W1209 14:42:29.792136 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d96dcbe_2cdc_4640_8c4b_8f5fc4e2f975.slice/crio-5880c4307d1226e2832edf889d470c43c614b8582499051cdaf666a16beaa628 WatchSource:0}: Error finding container 5880c4307d1226e2832edf889d470c43c614b8582499051cdaf666a16beaa628: Status 404 returned error can't find the container with id 5880c4307d1226e2832edf889d470c43c614b8582499051cdaf666a16beaa628 Dec 09 14:42:29 crc kubenswrapper[4770]: W1209 14:42:29.795865 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01836e5a_2708_4b73_b24d_79f804c8e0ef.slice/crio-7163861d7eab3d85a67408450b396f95219960fddbdeb515dbaa80149512e5a0 WatchSource:0}: Error finding container 7163861d7eab3d85a67408450b396f95219960fddbdeb515dbaa80149512e5a0: Status 404 returned error can't find the container with id 7163861d7eab3d85a67408450b396f95219960fddbdeb515dbaa80149512e5a0 Dec 09 14:42:29 crc kubenswrapper[4770]: I1209 14:42:29.937433 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" event={"ID":"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4","Type":"ContainerStarted","Data":"0cee5df0716de6bfac99f58733f0b23424e8cc665c9efd266aa501987d1a5222"} Dec 09 14:42:29 crc kubenswrapper[4770]: I1209 14:42:29.939487 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" event={"ID":"01836e5a-2708-4b73-b24d-79f804c8e0ef","Type":"ContainerStarted","Data":"7163861d7eab3d85a67408450b396f95219960fddbdeb515dbaa80149512e5a0"} Dec 09 14:42:29 crc kubenswrapper[4770]: I1209 14:42:29.940517 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" 
event={"ID":"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975","Type":"ContainerStarted","Data":"5880c4307d1226e2832edf889d470c43c614b8582499051cdaf666a16beaa628"} Dec 09 14:42:31 crc kubenswrapper[4770]: I1209 14:42:30.950348 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" event={"ID":"fadaf4e5-6b66-4dc3-b51e-e4700db03792","Type":"ContainerStarted","Data":"3590afeb53a525f051bbd054bbbde9abced591e8af926dd826d8dedc0880501e"} Dec 09 14:42:31 crc kubenswrapper[4770]: I1209 14:42:30.951567 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" event={"ID":"706fc0bc-1168-477f-b4ce-b30ea5b70bcf","Type":"ContainerStarted","Data":"d523bf0ae0ad789914ae4182b178422f96ab670ff728d14df00a1abc5a858631"} Dec 09 14:42:31 crc kubenswrapper[4770]: I1209 14:42:30.952913 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" event={"ID":"37bafc12-f467-417c-b3f7-6fc18896b73f","Type":"ContainerStarted","Data":"9a56d9ab984c46575db6937b910ad4fcf955113814a5283e24a425bef673b536"} Dec 09 14:42:31 crc kubenswrapper[4770]: I1209 14:42:30.954308 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" event={"ID":"218b9184-d581-40d1-bc52-734507d47b65","Type":"ContainerStarted","Data":"fdedcb9e5a6d0c464ddb68cddbcd0267c3b5e562f8f12b639c3e5a1b63b2d8d2"} Dec 09 14:42:31 crc kubenswrapper[4770]: I1209 14:42:30.955711 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" event={"ID":"60c59cb7-e525-45d2-a544-4b9a2dc6bbab","Type":"ContainerStarted","Data":"4d9722ade02b0a1927804fac7b220be46312fefcd97f204be3265a659f99eeaf"} Dec 09 14:42:31 crc kubenswrapper[4770]: I1209 14:42:30.957036 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" event={"ID":"5b04f571-f2cc-4486-91c9-d6f9f710f7fd","Type":"ContainerStarted","Data":"4a379ec3e0c55f6c64ca67ad62f150f0a6807a3f53b25ba169739bafe39023b7"} Dec 09 14:42:32 crc kubenswrapper[4770]: I1209 14:42:32.974944 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" event={"ID":"8fa6c3c5-bb85-4c66-b304-fa19ecb453e4","Type":"ContainerStarted","Data":"3c6e65228313303a2dc5d8707cf00306f7067ad70931e9f6c65bb5626577e436"} Dec 09 14:42:32 crc kubenswrapper[4770]: I1209 14:42:32.975432 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:42:33 crc kubenswrapper[4770]: I1209 14:42:33.016664 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" podStartSLOduration=47.016616848 podStartE2EDuration="47.016616848s" podCreationTimestamp="2025-12-09 14:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:42:33.002444353 +0000 UTC m=+1184.898646519" watchObservedRunningTime="2025-12-09 14:42:33.016616848 +0000 UTC m=+1184.912819004" Dec 09 14:42:38 crc kubenswrapper[4770]: I1209 14:42:38.984932 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-7c59bdd89-4cf5f" Dec 09 14:42:39 crc kubenswrapper[4770]: E1209 14:42:39.589180 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" podUID="8d7e5182-a5a9-4daf-b268-f967a207932c" Dec 09 14:42:44 crc kubenswrapper[4770]: I1209 14:42:44.244053 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:42:44 crc kubenswrapper[4770]: I1209 14:42:44.244778 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:42:48 crc kubenswrapper[4770]: E1209 14:42:48.344410 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3221003141/1\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab" Dec 09 14:42:48 crc kubenswrapper[4770]: E1209 14:42:48.345195 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-267zn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
Dec 09 14:42:48 crc kubenswrapper[4770]: E1209 14:42:48.345195 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-267zn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-78d48bff9d-55h7r_openstack-operators(01836e5a-2708-4b73-b24d-79f804c8e0ef): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3221003141/1\": happened during read: context canceled" logger="UnhandledError"
Dec 09 14:42:49 crc kubenswrapper[4770]: E1209 14:42:49.220365 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Dec 09 14:42:49 crc kubenswrapper[4770]: E1209 14:42:49.220872 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m782g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-lvzw2_openstack-operators(1e594101-5c21-4a4a-8027-39449d107481): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Dec 09 14:42:49 crc kubenswrapper[4770]: E1209 14:42:49.223088 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" podUID="1e594101-5c21-4a4a-8027-39449d107481"
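
Note that every pull failure in this run is some flavor of "rpc error: code = Canceled ... context canceled": quay.io rejected nothing; the Go context governing each CRI pull was canceled mid-transfer, whether while copying a config, copying a layer, or writing a blob under /var/tmp. The log does not show what canceled them (on a single-node CRC install, many simultaneous pulls contending under a deadline is a plausible cause, but that is conjecture). The error text itself is easy to reproduce: cancel a context while an HTTP request is in flight. The URL below is illustrative only:

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        // Stand-in for whatever abandoned the pull: cancel while the GET is in flight.
        go func() { time.Sleep(10 * time.Millisecond); cancel() }()
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://quay.io/v2/", nil)
        if err != nil {
            panic(err)
        }
        _, err = http.DefaultClient.Do(req)
        fmt.Println(err) // Get "https://quay.io/v2/": context canceled
    }

That is the exact shape of the "reading blob ... context canceled" failure logged for keystone-operator further down.
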
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" podUID="1e594101-5c21-4a4a-8027-39449d107481" Dec 09 14:42:50 crc kubenswrapper[4770]: E1209 14:42:50.099068 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:50 crc kubenswrapper[4770]: E1209 14:42:50.099302 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hkbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-6k7ff_openstack-operators(e3ac9095-4890-4130-b7f2-00c7927f6890): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:50 crc kubenswrapper[4770]: E1209 14:42:50.100941 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" podUID="e3ac9095-4890-4130-b7f2-00c7927f6890" Dec 09 14:42:50 crc kubenswrapper[4770]: E1209 14:42:50.636892 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2\": context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:50 crc kubenswrapper[4770]: E1209 14:42:50.637315 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84j9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-9ff9c_openstack-operators(5dcc1318-bc40-49c5-b6e1-718c06af70f3): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2\": context canceled" logger="UnhandledError" Dec 09 14:42:50 crc kubenswrapper[4770]: E1209 14:42:50.639032 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2: Get \\\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:723607448b0abc536cd883abffcf6942c1c562a48117db73f6fe693d99395ee2\\\": context canceled\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" podUID="5dcc1318-bc40-49c5-b6e1-718c06af70f3" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.473372 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.473547 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5r8td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-68275_openstack-operators(218b9184-d581-40d1-bc52-734507d47b65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.474541 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.474759 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" podUID="218b9184-d581-40d1-bc52-734507d47b65" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.475108 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.475133 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3564937127/1\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.475295 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkjq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f89dd7bc5-2vmmm_openstack-operators(37bafc12-f467-417c-b3f7-6fc18896b73f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.475415 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n7vgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-twpm6_openstack-operators(fadaf4e5-6b66-4dc3-b51e-e4700db03792): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.475471 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gx7p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5697bb5779-zx7w6_openstack-operators(5b04f571-f2cc-4486-91c9-d6f9f710f7fd): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3564937127/1\": happened during read: context canceled" logger="UnhandledError" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.476186 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.476290 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9nd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-ljx9f_openstack-operators(706fc0bc-1168-477f-b4ce-b30ea5b70bcf): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.476565 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" podUID="fadaf4e5-6b66-4dc3-b51e-e4700db03792" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.476625 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" podUID="37bafc12-f467-417c-b3f7-6fc18896b73f" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.476582 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage3564937127/1\\\": happened during read: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" podUID="5b04f571-f2cc-4486-91c9-d6f9f710f7fd" Dec 09 14:42:51 crc kubenswrapper[4770]: E1209 14:42:51.477842 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" podUID="706fc0bc-1168-477f-b4ce-b30ea5b70bcf" Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.177638 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" event={"ID":"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975","Type":"ContainerStarted","Data":"194a9df5cbb4156af2e05e40a056e021b72f8f2c8f26c7af6231fdc3daaca216"} Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.177982 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" event={"ID":"5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975","Type":"ContainerStarted","Data":"7fea149dab70c3bebcd78c0da6ecc0de8a318890216e3a1a03e8cbd8ee099e8d"} Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.178882 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.180788 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" event={"ID":"60c59cb7-e525-45d2-a544-4b9a2dc6bbab","Type":"ContainerStarted","Data":"46566e3d382a44ea5b9894c67e2004b405bac8b7d1f903dca1fbedf707e34815"} Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.181293 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.182053 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.184151 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.185122 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" Dec 
Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.205030 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l" podStartSLOduration=45.35069385 podStartE2EDuration="1m7.204997933s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:42:29.796229564 +0000 UTC m=+1181.692431700" lastFinishedPulling="2025-12-09 14:42:51.650533647 +0000 UTC m=+1203.546735783" observedRunningTime="2025-12-09 14:42:52.202826824 +0000 UTC m=+1204.099028980" watchObservedRunningTime="2025-12-09 14:42:52.204997933 +0000 UTC m=+1204.101200079"
Dec 09 14:42:52 crc kubenswrapper[4770]: I1209 14:42:52.276151 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-52xhp" podStartSLOduration=4.490432648 podStartE2EDuration="1m7.27612903s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.843797175 +0000 UTC m=+1140.739999311" lastFinishedPulling="2025-12-09 14:42:51.629493567 +0000 UTC m=+1203.525695693" observedRunningTime="2025-12-09 14:42:52.273428486 +0000 UTC m=+1204.169630642" watchObservedRunningTime="2025-12-09 14:42:52.27612903 +0000 UTC m=+1204.172331176"
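
The pod_startup_latency_tracker entries report two figures per pod: podStartE2EDuration is wall time from pod creation to observed running, while podStartSLOduration is the same interval minus the image-pull window (lastFinishedPulling - firstStartedPulling), so that slow registry pulls do not count against the startup SLO. Where no pull time was recorded (firstStartedPulling 0001-01-01, as in the openstack-operator entry earlier), the two figures are identical. The m=+ monotonic offsets in the baremetal entry above check out exactly:

    package main

    import "fmt"

    func main() {
        // Figures copied from the openstack-baremetal-operator entry above.
        const (
            e2e       = 67.204997933   // podStartE2EDuration "1m7.204997933s"
            firstPull = 1181.692431700 // firstStartedPulling m=+1181.692431700
            lastPull  = 1203.546735783 // lastFinishedPulling m=+1203.546735783
        )
        fmt.Printf("%.9f\n", e2e-(lastPull-firstPull)) // 45.350693850, the logged podStartSLOduration
    }

The same subtraction explains the horizon entry: 67.27612903 less 62.785696382 of pulling leaves the logged 4.490432648.
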
Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.134365 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.134866 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mxv4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-m4rlv_openstack-operators(a697db6f-78dd-4f87-bafb-b1ad6ddfa241): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.136031 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" podUID="a697db6f-78dd-4f87-bafb-b1ad6ddfa241"
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.187833 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" event={"ID":"fadaf4e5-6b66-4dc3-b51e-e4700db03792","Type":"ContainerStarted","Data":"a5dcb40335a81d6065e3c2f3d8108f6249359f0b8dad4a0a4fd87d389c8c8f70"}
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.188357 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6"
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.189446 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" event={"ID":"706fc0bc-1168-477f-b4ce-b30ea5b70bcf","Type":"ContainerStarted","Data":"95c37a2327b77aad279e176b5d9686df92aadc1dafe9ce707b6dc43d921946c7"}
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.190522 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6"
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.190839 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" event={"ID":"37bafc12-f467-417c-b3f7-6fc18896b73f","Type":"ContainerStarted","Data":"b072f8a08bc4a7015da4188b69f92cd5f2d78082a485fa50c1b71a8f631d6d00"}
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.191510 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm"
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.192306 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" event={"ID":"218b9184-d581-40d1-bc52-734507d47b65","Type":"ContainerStarted","Data":"f502e4f82a50e656a38573e018aebdf91d13ef0379521b6ebc8cda57010da329"}
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.192927 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275"
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.193841 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" event={"ID":"8d7e5182-a5a9-4daf-b268-f967a207932c","Type":"ContainerStarted","Data":"e33b1ba462adfcbfb72ad4ff761f0d0e1d210a9a442e1f5a3aac433475028ea9"}
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.194938 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275"
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.195480 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" event={"ID":"5b04f571-f2cc-4486-91c9-d6f9f710f7fd","Type":"ContainerStarted","Data":"7b14e6e2eb2f2c7935ec631ae950a9bb1b9ca57b9b70c558ac74dc4668d2ff68"}
Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.195939 4770
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.197464 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.197627 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.208505 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-twpm6" podStartSLOduration=36.898342702 podStartE2EDuration="1m8.208483559s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:47.298344432 +0000 UTC m=+1139.194546568" lastFinishedPulling="2025-12-09 14:42:18.608485259 +0000 UTC m=+1170.504687425" observedRunningTime="2025-12-09 14:42:53.203337529 +0000 UTC m=+1205.099539665" watchObservedRunningTime="2025-12-09 14:42:53.208483559 +0000 UTC m=+1205.104685695" Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.228534 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-68275" podStartSLOduration=34.263820513 podStartE2EDuration="1m8.228513381s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.232397247 +0000 UTC m=+1140.128599383" lastFinishedPulling="2025-12-09 14:42:22.197090115 +0000 UTC m=+1174.093292251" observedRunningTime="2025-12-09 14:42:53.225976053 +0000 UTC m=+1205.122178189" watchObservedRunningTime="2025-12-09 14:42:53.228513381 +0000 UTC m=+1205.124715517" Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.299081 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c5xpp" podStartSLOduration=3.47465386 podStartE2EDuration="1m7.299060522s" podCreationTimestamp="2025-12-09 14:41:46 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.868815662 +0000 UTC m=+1140.765017798" lastFinishedPulling="2025-12-09 14:42:52.693222324 +0000 UTC m=+1204.589424460" observedRunningTime="2025-12-09 14:42:53.273450729 +0000 UTC m=+1205.169652865" watchObservedRunningTime="2025-12-09 14:42:53.299060522 +0000 UTC m=+1205.195262658" Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.314286 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-ljx9f" podStartSLOduration=33.386245367 podStartE2EDuration="1m8.306201455s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.388789873 +0000 UTC m=+1140.284992009" lastFinishedPulling="2025-12-09 14:42:23.308745961 +0000 UTC m=+1175.204948097" observedRunningTime="2025-12-09 14:42:53.298792645 +0000 UTC m=+1205.194994781" watchObservedRunningTime="2025-12-09 14:42:53.306201455 +0000 UTC m=+1205.202403581" Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.336334 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-zx7w6" podStartSLOduration=37.952180142 podStartE2EDuration="1m8.336314551s" podCreationTimestamp="2025-12-09 14:41:45 +0000 
UTC" firstStartedPulling="2025-12-09 14:41:48.224283498 +0000 UTC m=+1140.120485634" lastFinishedPulling="2025-12-09 14:42:18.608417887 +0000 UTC m=+1170.504620043" observedRunningTime="2025-12-09 14:42:53.331926721 +0000 UTC m=+1205.228128858" watchObservedRunningTime="2025-12-09 14:42:53.336314551 +0000 UTC m=+1205.232516687" Dec 09 14:42:53 crc kubenswrapper[4770]: I1209 14:42:53.353451 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f89dd7bc5-2vmmm" podStartSLOduration=38.415621961 podStartE2EDuration="1m8.353434454s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.67044773 +0000 UTC m=+1140.566649866" lastFinishedPulling="2025-12-09 14:42:18.608260223 +0000 UTC m=+1170.504462359" observedRunningTime="2025-12-09 14:42:53.350224488 +0000 UTC m=+1205.246426634" watchObservedRunningTime="2025-12-09 14:42:53.353434454 +0000 UTC m=+1205.249636590" Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.441959 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1630199148/4\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.442136 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpgwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-rfq2b_openstack-operators(d65c2397-adaa-461c-9a86-05901e7b3726): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1630199148/4\": happened during read: context canceled" logger="UnhandledError" Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.443445 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = 
Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1630199148/4\\\": happened during read: context canceled\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" podUID="d65c2397-adaa-461c-9a86-05901e7b3726" Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.535867 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.536033 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zd4bf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-6rp4l_openstack-operators(847e2267-4743-4e6d-b76f-81bb0402a8e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.537268 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" podUID="847e2267-4743-4e6d-b76f-81bb0402a8e2" Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.912741 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.912872 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d8tv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-brn9f_openstack-operators(703c76ff-d327-45f7-a9ae-2d60d7629d31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:42:53 crc kubenswrapper[4770]: E1209 14:42:53.915012 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" podUID="703c76ff-d327-45f7-a9ae-2d60d7629d31" Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.085575 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage3221003141/1\\\": happened during read: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" podUID="01836e5a-2708-4b73-b24d-79f804c8e0ef" Dec 09 14:42:54 crc kubenswrapper[4770]: I1209 14:42:54.204808 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" event={"ID":"01836e5a-2708-4b73-b24d-79f804c8e0ef","Type":"ContainerStarted","Data":"37d3edf8c848d5bdb4e7bffc42f25118bf8bd39677d57a50c62905e71d3e922e"} Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.353091 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage4160112919/5\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.353312 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bflw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-wdqn5_openstack-operators(7ff426e3-2995-401a-9587-b7277f96e1b3): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage4160112919/5\": happened during read: context canceled" logger="UnhandledError" Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.354502 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage4160112919/5\\\": happened during read: context canceled\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" podUID="7ff426e3-2995-401a-9587-b7277f96e1b3" Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.359819 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab\\\"\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" podUID="01836e5a-2708-4b73-b24d-79f804c8e0ef" Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.796858 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.796903 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.796998 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tf6v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-fjnk7_openstack-operators(b3f7f32e-9cb9-42e8-8aca-86bb7b16479d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.797143 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2c78w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-r2rm9_openstack-operators(6c377cc4-9030-4cc7-96b9-68d9634e24da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.798217 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" podUID="b3f7f32e-9cb9-42e8-8aca-86bb7b16479d"
Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.798242 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" podUID="6c377cc4-9030-4cc7-96b9-68d9634e24da"
Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.799185 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.799349 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbtlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-zvfn5_openstack-operators(369cb688-d8db-443a-beef-6f0cf31b31cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 09 14:42:54 crc kubenswrapper[4770]: E1209 14:42:54.801186 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" podUID="369cb688-d8db-443a-beef-6f0cf31b31cf"
Dec 09 14:42:55 crc kubenswrapper[4770]: I1209 14:42:55.212509 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" event={"ID":"1e594101-5c21-4a4a-8027-39449d107481","Type":"ContainerStarted","Data":"0f7f1ebfe864b71ea81c2f625f18a095ef496ff6bffb756f28f73b35cb7a7e59"}
Dec 09 14:42:55 crc kubenswrapper[4770]: I1209 14:42:55.214141 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" event={"ID":"a697db6f-78dd-4f87-bafb-b1ad6ddfa241","Type":"ContainerStarted","Data":"e818edb011da3c0c3996b19e4fdaf2b4bb9a331ac2a0975f018f6675dcaf0af0"}
Dec 09 14:42:55 crc kubenswrapper[4770]: E1209 14:42:55.229877 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab\\\"\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" podUID="01836e5a-2708-4b73-b24d-79f804c8e0ef"
Dec 09 14:42:56 crc kubenswrapper[4770]: I1209 14:42:56.223755 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" event={"ID":"1e594101-5c21-4a4a-8027-39449d107481","Type":"ContainerStarted","Data":"356cbcc7e49357b412cf1921031ababae16f38bb01df652106edd307c81e77f9"}
Dec 09 14:42:56 crc kubenswrapper[4770]: I1209 14:42:56.224086 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2"
Dec 09 14:42:56 crc kubenswrapper[4770]: I1209 14:42:56.225412 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" event={"ID":"a697db6f-78dd-4f87-bafb-b1ad6ddfa241","Type":"ContainerStarted","Data":"c03a2f474a160250aba5c19c6701c78484dad1b0eb2d260bd84800115fb8573b"}
Dec 09 14:42:56 crc kubenswrapper[4770]: I1209 14:42:56.225546 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv"
Dec 09 14:42:56 crc kubenswrapper[4770]: I1209 14:42:56.246520 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2" podStartSLOduration=5.303677832 podStartE2EDuration="1m11.246497964s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.427931753 +0000 UTC m=+1140.324133879" lastFinishedPulling="2025-12-09 14:42:54.370751875 +0000 UTC m=+1206.266954011" observedRunningTime="2025-12-09 14:42:56.242356671 +0000 UTC m=+1208.138558817" watchObservedRunningTime="2025-12-09 14:42:56.246497964 +0000 UTC m=+1208.142700100"
Dec 09 14:42:56 crc kubenswrapper[4770]: E1209 14:42:56.264008 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" podUID="02f392e6-53d4-4bdb-bb7c-3ff1e29266bd"
Dec 09 14:42:56 crc kubenswrapper[4770]: I1209 14:42:56.277934 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv" podStartSLOduration=5.339095301 podStartE2EDuration="1m11.277911725s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.42967712 +0000 UTC m=+1140.325879256" lastFinishedPulling="2025-12-09 14:42:54.368493544 +0000 UTC m=+1206.264695680" observedRunningTime="2025-12-09 14:42:56.269681112 +0000 UTC m=+1208.165883248" watchObservedRunningTime="2025-12-09 14:42:56.277911725 +0000 UTC m=+1208.174113861"
Dec 09 14:42:56 crc kubenswrapper[4770]: E1209 14:42:56.565604 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" podUID="2df32a14-ab02-48cf-94e4-5dd7b72fdcff"
Dec 09 14:42:57 crc kubenswrapper[4770]: I1209 14:42:57.233643 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" event={"ID":"02f392e6-53d4-4bdb-bb7c-3ff1e29266bd","Type":"ContainerStarted","Data":"d5f027af4a5284d37110725e41e0d713a05b71be21d95cf40cc335260bcb32cc"}
Dec 09 14:42:57 crc kubenswrapper[4770]: I1209 14:42:57.235087 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" event={"ID":"2df32a14-ab02-48cf-94e4-5dd7b72fdcff","Type":"ContainerStarted","Data":"19f7c8e9ff87056ce54b997f71b8c5568ea055b07dd9f5f5c9d28d1c07d08d0c"}
Dec 09 14:42:58 crc kubenswrapper[4770]: I1209 14:42:58.257865 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" event={"ID":"847e2267-4743-4e6d-b76f-81bb0402a8e2","Type":"ContainerStarted","Data":"c62f1227cf4ab903dc44a75456c61607a45e56745c213c3add0a2370441e3994"}
Dec 09 14:42:58 crc kubenswrapper[4770]: I1209 14:42:58.259902 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" event={"ID":"e3ac9095-4890-4130-b7f2-00c7927f6890","Type":"ContainerStarted","Data":"544158a647e7f50718baa76507cc75f7c1b4329c7581043f93757027277fc05e"}
Dec 09 14:42:58 crc kubenswrapper[4770]: I1209 14:42:58.302835 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" event={"ID":"703c76ff-d327-45f7-a9ae-2d60d7629d31","Type":"ContainerStarted","Data":"668622972f542f6f3fc3a2629b7ca3b61c7b9678448789caf984f2db589180ae"}
Dec 09 14:42:58 crc kubenswrapper[4770]: I1209 14:42:58.304593 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" event={"ID":"369cb688-d8db-443a-beef-6f0cf31b31cf","Type":"ContainerStarted","Data":"1dd935d7b560646ecd298efa224ee922b655741270db08b3b0230a754a831fee"}
Dec 09 14:42:58 crc kubenswrapper[4770]: I1209 14:42:58.306233 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" event={"ID":"b3f7f32e-9cb9-42e8-8aca-86bb7b16479d","Type":"ContainerStarted","Data":"a7bff3ebc098e3a3709b04d5aa732802cd5739d33e127424060939430cb64a3f"}
Dec 09 14:42:58 crc kubenswrapper[4770]: I1209 14:42:58.307916 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" event={"ID":"6c377cc4-9030-4cc7-96b9-68d9634e24da","Type":"ContainerStarted","Data":"30f7ea54bc8c1440576c6164d86bee80135c1bc188dab0e2970350413c88663d"}
Dec 09 14:42:58 crc kubenswrapper[4770]: I1209 14:42:58.378888 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5bk9l"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.322439 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" event={"ID":"e3ac9095-4890-4130-b7f2-00c7927f6890","Type":"ContainerStarted","Data":"c3e48761c06642513e42aed40df4298511d9f84388e0f7264bfeb164a9f25552"}
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.323198 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.325382 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" event={"ID":"703c76ff-d327-45f7-a9ae-2d60d7629d31","Type":"ContainerStarted","Data":"ce92dd79b92d65ed2704f2751447ebbe60a47e6c4ae6ced52c7111ae4a044274"}
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.325462 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.328159 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" event={"ID":"369cb688-d8db-443a-beef-6f0cf31b31cf","Type":"ContainerStarted","Data":"1d83d782b437e3af4d953c522d20b7e86faeab612ffbcff83e4a33630d10f9a3"}
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.328280 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.330174 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" event={"ID":"b3f7f32e-9cb9-42e8-8aca-86bb7b16479d","Type":"ContainerStarted","Data":"aeb45c23ad057568208aaac673776170cf71fb72a4fb9588ba362137c316ab3b"}
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.330286 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.331907 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" event={"ID":"6c377cc4-9030-4cc7-96b9-68d9634e24da","Type":"ContainerStarted","Data":"f011b6d1791b65721c6da6cbfedba949111d69c6b484e32af4f8bd541f083a7f"}
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.331943 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.334430 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" event={"ID":"02f392e6-53d4-4bdb-bb7c-3ff1e29266bd","Type":"ContainerStarted","Data":"430e6570619395c6bc85db5d9796ade09f019d4b0e04a595712e14766e649eb8"}
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.334656 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.342806 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" event={"ID":"847e2267-4743-4e6d-b76f-81bb0402a8e2","Type":"ContainerStarted","Data":"d43ee07337dcf9108fa1a8fd063ba442047b81cb69c70f75c8f8ec0875de1073"}
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.342945 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.344826 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" event={"ID":"2df32a14-ab02-48cf-94e4-5dd7b72fdcff","Type":"ContainerStarted","Data":"c958d541601991e2d82e452e45629508dc3e43652ff8d60e1b2f20262daa72f8"}
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.345201 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.345347 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff" podStartSLOduration=5.665289724 podStartE2EDuration="1m14.345329766s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.224828992 +0000 UTC m=+1140.121031128" lastFinishedPulling="2025-12-09 14:42:56.904869034 +0000 UTC m=+1208.801071170" observedRunningTime="2025-12-09 14:42:59.338512391 +0000 UTC m=+1211.234714527" watchObservedRunningTime="2025-12-09 14:42:59.345329766 +0000 UTC m=+1211.241531902"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.403932 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q" podStartSLOduration=4.607318143 podStartE2EDuration="1m14.403910252s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.663634796 +0000 UTC m=+1140.559836932" lastFinishedPulling="2025-12-09 14:42:58.460226905 +0000 UTC m=+1210.356429041" observedRunningTime="2025-12-09 14:42:59.371071503 +0000 UTC m=+1211.267273659" watchObservedRunningTime="2025-12-09 14:42:59.403910252 +0000 UTC m=+1211.300112388"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.409199 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f" podStartSLOduration=6.419909151 podStartE2EDuration="1m14.409182335s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.83289933 +0000 UTC m=+1140.729101466" lastFinishedPulling="2025-12-09 14:42:56.822172504 +0000 UTC m=+1208.718374650" observedRunningTime="2025-12-09 14:42:59.402110203 +0000 UTC m=+1211.298312349" watchObservedRunningTime="2025-12-09 14:42:59.409182335 +0000 UTC m=+1211.305384471"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.418023 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9" podStartSLOduration=6.003944906 podStartE2EDuration="1m14.418003984s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.407000426 +0000 UTC m=+1140.303202562" lastFinishedPulling="2025-12-09 14:42:56.821059514 +0000 UTC m=+1208.717261640" observedRunningTime="2025-12-09 14:42:59.417146911 +0000 UTC m=+1211.313349047" watchObservedRunningTime="2025-12-09 14:42:59.418003984 +0000 UTC m=+1211.314206110"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.444364 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5" podStartSLOduration=6.045915592 podStartE2EDuration="1m14.444347687s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.423369649 +0000 UTC m=+1140.319571775" lastFinishedPulling="2025-12-09 14:42:56.821801734 +0000 UTC m=+1208.718003870" observedRunningTime="2025-12-09 14:42:59.442299892 +0000 UTC m=+1211.338502028" watchObservedRunningTime="2025-12-09 14:42:59.444347687 +0000 UTC m=+1211.340549823"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.470468 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7" podStartSLOduration=5.7898188170000005 podStartE2EDuration="1m14.470452295s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.224338239 +0000 UTC m=+1140.120540375" lastFinishedPulling="2025-12-09 14:42:56.904971717 +0000 UTC m=+1208.801173853" observedRunningTime="2025-12-09 14:42:59.469510848 +0000 UTC m=+1211.365712994" watchObservedRunningTime="2025-12-09 14:42:59.470452295 +0000 UTC m=+1211.366654431"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.515790 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5" podStartSLOduration=4.942833028 podStartE2EDuration="1m14.515765521s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.854558436 +0000 UTC m=+1140.750760572" lastFinishedPulling="2025-12-09 14:42:58.427490939 +0000 UTC m=+1210.323693065" observedRunningTime="2025-12-09 14:42:59.500836416 +0000 UTC m=+1211.397038552" watchObservedRunningTime="2025-12-09 14:42:59.515765521 +0000 UTC m=+1211.411967657"
Dec 09 14:42:59 crc kubenswrapper[4770]: I1209 14:42:59.516780 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l" podStartSLOduration=6.53103738 podStartE2EDuration="1m14.516772148s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.835815929 +0000 UTC m=+1140.732018065" lastFinishedPulling="2025-12-09 14:42:56.821550697 +0000 UTC m=+1208.717752833" observedRunningTime="2025-12-09 14:42:59.512830732 +0000 UTC m=+1211.409032868" watchObservedRunningTime="2025-12-09 14:42:59.516772148 +0000 UTC m=+1211.412974284"
Dec 09 14:43:04 crc kubenswrapper[4770]: I1209 14:43:04.401999 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" event={"ID":"5dcc1318-bc40-49c5-b6e1-718c06af70f3","Type":"ContainerStarted","Data":"1166b3bcd6fc96ae0c9b718107d838748ab1582100e2681c03ef64c097d7ff7b"}
Dec 09 14:43:04 crc kubenswrapper[4770]: I1209 14:43:04.402680 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" event={"ID":"5dcc1318-bc40-49c5-b6e1-718c06af70f3","Type":"ContainerStarted","Data":"f31ec42daee77e86abd6777db097000ba4224f404d2fb477d76598187b7c6f35"}
Dec 09 14:43:04 crc kubenswrapper[4770]: I1209 14:43:04.402912 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c"
Dec 09 14:43:04 crc kubenswrapper[4770]: I1209 14:43:04.423697 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c" podStartSLOduration=4.9189570830000005 podStartE2EDuration="1m19.423676766s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.866540391 +0000 UTC m=+1140.762742527" lastFinishedPulling="2025-12-09 14:43:03.371260064 +0000 UTC m=+1215.267462210" observedRunningTime="2025-12-09 14:43:04.42011619 +0000 UTC m=+1216.316318326" watchObservedRunningTime="2025-12-09 14:43:04.423676766 +0000 UTC m=+1216.319878902"
Dec 09 14:43:05 crc kubenswrapper[4770]: I1209 14:43:05.695350 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6k7ff"
Dec 09 14:43:05 crc kubenswrapper[4770]: I1209 14:43:05.839542 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r2rm9"
Dec 09 14:43:05 crc kubenswrapper[4770]: I1209 14:43:05.959505 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-fjnk7"
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.076765 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6rp4l"
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.107918 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-m4rlv"
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.182585 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-lvzw2"
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.203316 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zvfn5"
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.358824 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-brn9f"
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.416359 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" event={"ID":"d65c2397-adaa-461c-9a86-05901e7b3726","Type":"ContainerStarted","Data":"65139b16521394a3c73516aa174951d494fad01ef2fb398a62ce0678d1978507"}
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.416395 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" event={"ID":"d65c2397-adaa-461c-9a86-05901e7b3726","Type":"ContainerStarted","Data":"72447b193797c4dc02b7643b10ae87ac61cad6262f925025dd2c7920a3611c64"}
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.416891 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b"
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.444538 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b" podStartSLOduration=4.264744046 podStartE2EDuration="1m21.444518494s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.853859998 +0000 UTC m=+1140.750062134" lastFinishedPulling="2025-12-09 14:43:06.033634446 +0000 UTC m=+1217.929836582" observedRunningTime="2025-12-09 14:43:06.438763208 +0000 UTC m=+1218.334965364" watchObservedRunningTime="2025-12-09 14:43:06.444518494 +0000 UTC m=+1218.340720630"
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.747067 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j48s5"
Dec 09 14:43:06 crc kubenswrapper[4770]: I1209 14:43:06.753012 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-6xd5q"
Dec 09 14:43:08 crc kubenswrapper[4770]: I1209 14:43:08.435905 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" event={"ID":"7ff426e3-2995-401a-9587-b7277f96e1b3","Type":"ContainerStarted","Data":"3256606d6f322fbfdbc9f0f8b88b8250615c5084be403d224b7529badd1c78ae"}
Dec 09 14:43:08 crc kubenswrapper[4770]: I1209 14:43:08.436164 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" event={"ID":"7ff426e3-2995-401a-9587-b7277f96e1b3","Type":"ContainerStarted","Data":"e2eae9825e739d270eeb871c56339cec76955a43d3e1f660b1747f5b3e8bf457"}
Dec 09 14:43:08 crc kubenswrapper[4770]: I1209 14:43:08.436321 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5"
Dec 09 14:43:08 crc kubenswrapper[4770]: I1209 14:43:08.455678 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5" podStartSLOduration=5.079507051 podStartE2EDuration="1m23.455659219s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:41:48.859244214 +0000 UTC m=+1140.755446350" lastFinishedPulling="2025-12-09 14:43:07.235396382 +0000 UTC m=+1219.131598518" observedRunningTime="2025-12-09 14:43:08.451767534 +0000 UTC m=+1220.347969670" watchObservedRunningTime="2025-12-09 14:43:08.455659219 +0000 UTC m=+1220.351861355"
Dec 09 14:43:10 crc kubenswrapper[4770]: I1209 14:43:10.450486 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" event={"ID":"01836e5a-2708-4b73-b24d-79f804c8e0ef","Type":"ContainerStarted","Data":"11d9d9df63b77f2904f32902ed4e67ff15c20dd54b16b43165d54da2832081e9"}
Dec 09 14:43:10 crc kubenswrapper[4770]: I1209 14:43:10.450919 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r"
Dec 09 14:43:10 crc kubenswrapper[4770]: I1209 14:43:10.469872 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r" podStartSLOduration=45.151843534 podStartE2EDuration="1m25.469856917s" podCreationTimestamp="2025-12-09 14:41:45 +0000 UTC" firstStartedPulling="2025-12-09 14:42:29.812005921 +0000 UTC m=+1181.708208077" lastFinishedPulling="2025-12-09 14:43:10.130019324 +0000 UTC m=+1222.026221460" observedRunningTime="2025-12-09 14:43:10.467524524 +0000 UTC m=+1222.363726660" watchObservedRunningTime="2025-12-09 14:43:10.469856917 +0000 UTC m=+1222.366059053"
Dec 09 14:43:14 crc kubenswrapper[4770]: I1209 14:43:14.243105 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:43:14 crc kubenswrapper[4770]: I1209 14:43:14.243425 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:43:14 crc kubenswrapper[4770]: I1209 14:43:14.243471 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj"
Dec 09 14:43:14 crc kubenswrapper[4770]: I1209 14:43:14.244079 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f16eaf98d6441c99fac37159c836b0846fa6ac7bd81ba244c2067e5f830e8c2"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 14:43:14 crc kubenswrapper[4770]: I1209 14:43:14.244136 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://0f16eaf98d6441c99fac37159c836b0846fa6ac7bd81ba244c2067e5f830e8c2" gracePeriod=600
Dec 09 14:43:14 crc kubenswrapper[4770]: I1209 14:43:14.483960 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="0f16eaf98d6441c99fac37159c836b0846fa6ac7bd81ba244c2067e5f830e8c2" exitCode=0
Dec 09 14:43:14 crc kubenswrapper[4770]: I1209 14:43:14.484038 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"0f16eaf98d6441c99fac37159c836b0846fa6ac7bd81ba244c2067e5f830e8c2"}
Dec 09 14:43:14 crc kubenswrapper[4770]: I1209 14:43:14.484343 4770 scope.go:117] "RemoveContainer" containerID="852cc9377060614876d64ddd5f4b7f4f0b5e6e1aa2ca6cd6b6ccad59c9c4ae81"
Dec 09 14:43:15 crc kubenswrapper[4770]: I1209 14:43:15.495157 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"08e4c65cee400d2486f41106aee41be450d436f2cac9e02f916b74733c20d0e5"}
Dec 09 14:43:15 crc kubenswrapper[4770]: I1209 14:43:15.932398 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9ff9c"
Dec 09 14:43:17 crc kubenswrapper[4770]: I1209 14:43:17.420183 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-wdqn5"
Dec 09 14:43:17 crc kubenswrapper[4770]: I1209 14:43:17.439016 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-rfq2b"
Dec 09 14:43:17 crc kubenswrapper[4770]: I1209 14:43:17.643039 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-55h7r"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.318126 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lv2mf"]
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.328504 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.331945 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.334258 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.334641 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.335389 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5t6bm"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.342520 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lv2mf"]
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.404961 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6pfhp"]
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.405838 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjvjn\" (UniqueName: \"kubernetes.io/projected/61983ce8-3f93-40f4-9729-7cdff9b84ad8-kube-api-access-hjvjn\") pod \"dnsmasq-dns-675f4bcbfc-lv2mf\" (UID: \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.405954 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61983ce8-3f93-40f4-9729-7cdff9b84ad8-config\") pod \"dnsmasq-dns-675f4bcbfc-lv2mf\" (UID: \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.406340 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.415648 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.418272 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6pfhp"]
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.507387 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvz6\" (UniqueName: \"kubernetes.io/projected/ba1ac1e3-8aa6-433e-8582-2057528dfc89-kube-api-access-ktvz6\") pod \"dnsmasq-dns-78dd6ddcc-6pfhp\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.507463 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61983ce8-3f93-40f4-9729-7cdff9b84ad8-config\") pod \"dnsmasq-dns-675f4bcbfc-lv2mf\" (UID: \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.507499 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6pfhp\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.507539 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-config\") pod \"dnsmasq-dns-78dd6ddcc-6pfhp\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.507558 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjvjn\" (UniqueName: \"kubernetes.io/projected/61983ce8-3f93-40f4-9729-7cdff9b84ad8-kube-api-access-hjvjn\") pod \"dnsmasq-dns-675f4bcbfc-lv2mf\" (UID: \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.508815 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61983ce8-3f93-40f4-9729-7cdff9b84ad8-config\") pod \"dnsmasq-dns-675f4bcbfc-lv2mf\" (UID: \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.530575 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjvjn\" (UniqueName: \"kubernetes.io/projected/61983ce8-3f93-40f4-9729-7cdff9b84ad8-kube-api-access-hjvjn\") pod \"dnsmasq-dns-675f4bcbfc-lv2mf\" (UID: \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.609276 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6pfhp\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.609351 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-config\") pod \"dnsmasq-dns-78dd6ddcc-6pfhp\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.609418 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktvz6\" (UniqueName: \"kubernetes.io/projected/ba1ac1e3-8aa6-433e-8582-2057528dfc89-kube-api-access-ktvz6\") pod \"dnsmasq-dns-78dd6ddcc-6pfhp\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.610960 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6pfhp\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.611714 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-config\") pod \"dnsmasq-dns-78dd6ddcc-6pfhp\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.634333 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktvz6\" (UniqueName: \"kubernetes.io/projected/ba1ac1e3-8aa6-433e-8582-2057528dfc89-kube-api-access-ktvz6\") pod \"dnsmasq-dns-78dd6ddcc-6pfhp\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.658141 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf"
Dec 09 14:43:34 crc kubenswrapper[4770]: I1209 14:43:34.732845 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp"
Dec 09 14:43:35 crc kubenswrapper[4770]: I1209 14:43:35.074836 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6pfhp"]
Dec 09 14:43:35 crc kubenswrapper[4770]: I1209 14:43:35.148701 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lv2mf"]
Dec 09 14:43:35 crc kubenswrapper[4770]: W1209 14:43:35.151559 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61983ce8_3f93_40f4_9729_7cdff9b84ad8.slice/crio-07d34ad74b158df0fcb71648993ff16b2bbdf3f0ba0c1af8c62821ea62a171b0 WatchSource:0}: Error finding container 07d34ad74b158df0fcb71648993ff16b2bbdf3f0ba0c1af8c62821ea62a171b0: Status 404 returned error can't find the container with id 07d34ad74b158df0fcb71648993ff16b2bbdf3f0ba0c1af8c62821ea62a171b0
Dec 09 14:43:35 crc kubenswrapper[4770]: I1209 14:43:35.652212 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp" event={"ID":"ba1ac1e3-8aa6-433e-8582-2057528dfc89","Type":"ContainerStarted","Data":"8f8a8599ac9b24c4235ced1639d8926a5d232da18de1dd34af51d55e5d4cf846"}
Dec 09 14:43:35 crc kubenswrapper[4770]: I1209 14:43:35.654029 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf" event={"ID":"61983ce8-3f93-40f4-9729-7cdff9b84ad8","Type":"ContainerStarted","Data":"07d34ad74b158df0fcb71648993ff16b2bbdf3f0ba0c1af8c62821ea62a171b0"}
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.718040 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lv2mf"]
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.743669 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8c8gb"]
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.745324 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.751231 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8c8gb"]
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.857831 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-config\") pod \"dnsmasq-dns-666b6646f7-8c8gb\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.857912 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzs7z\" (UniqueName: \"kubernetes.io/projected/85ea185b-86cb-4c27-855b-f27876b91a20-kube-api-access-kzs7z\") pod \"dnsmasq-dns-666b6646f7-8c8gb\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.857974 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8c8gb\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.959673 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8c8gb\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.959778 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-config\") pod \"dnsmasq-dns-666b6646f7-8c8gb\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.959856 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzs7z\" (UniqueName: \"kubernetes.io/projected/85ea185b-86cb-4c27-855b-f27876b91a20-kube-api-access-kzs7z\") pod \"dnsmasq-dns-666b6646f7-8c8gb\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.960707 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-config\") pod \"dnsmasq-dns-666b6646f7-8c8gb\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.960742 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8c8gb\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:37 crc kubenswrapper[4770]: I1209 14:43:37.991768 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzs7z\" (UniqueName: \"kubernetes.io/projected/85ea185b-86cb-4c27-855b-f27876b91a20-kube-api-access-kzs7z\") pod \"dnsmasq-dns-666b6646f7-8c8gb\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.058032 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6pfhp"]
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.072100 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.093389 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rr5s8"]
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.096272 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.120926 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rr5s8"]
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.163330 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-config\") pod \"dnsmasq-dns-57d769cc4f-rr5s8\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") " pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.163401 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nxpm\" (UniqueName: \"kubernetes.io/projected/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-kube-api-access-9nxpm\") pod \"dnsmasq-dns-57d769cc4f-rr5s8\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") " pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.163486 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rr5s8\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") " pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.266164 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxpm\" (UniqueName: \"kubernetes.io/projected/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-kube-api-access-9nxpm\") pod \"dnsmasq-dns-57d769cc4f-rr5s8\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") " pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.266229 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rr5s8\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") " pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.266291 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-config\") pod \"dnsmasq-dns-57d769cc4f-rr5s8\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") " pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.267173 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-config\") pod \"dnsmasq-dns-57d769cc4f-rr5s8\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") " pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.267920 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rr5s8\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") " pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.293897 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxpm\" (UniqueName: \"kubernetes.io/projected/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-kube-api-access-9nxpm\") pod \"dnsmasq-dns-57d769cc4f-rr5s8\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") " pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.415308 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.666953 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8c8gb"]
Dec 09 14:43:38 crc kubenswrapper[4770]: W1209 14:43:38.677173 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85ea185b_86cb_4c27_855b_f27876b91a20.slice/crio-6b7f1889eaf7aec71642d7e0879ebe98d94af0bab41ba4a61201eaccf3afd907 WatchSource:0}: Error finding container 6b7f1889eaf7aec71642d7e0879ebe98d94af0bab41ba4a61201eaccf3afd907: Status 404 returned error can't find the container with id 6b7f1889eaf7aec71642d7e0879ebe98d94af0bab41ba4a61201eaccf3afd907
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.693086 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rr5s8"]
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.693176 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" event={"ID":"85ea185b-86cb-4c27-855b-f27876b91a20","Type":"ContainerStarted","Data":"6b7f1889eaf7aec71642d7e0879ebe98d94af0bab41ba4a61201eaccf3afd907"}
Dec 09 14:43:38 crc kubenswrapper[4770]: W1209 14:43:38.695386 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f64084c_eec9_4638_9bc2_ead5dd90d8b6.slice/crio-324080b81e4ee2ddfacba03eb1866ae8bc9c631effc04768cc94c2e1856369d7 WatchSource:0}: Error finding container 324080b81e4ee2ddfacba03eb1866ae8bc9c631effc04768cc94c2e1856369d7: Status 404 returned error can't find the container with id 324080b81e4ee2ddfacba03eb1866ae8bc9c631effc04768cc94c2e1856369d7
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.914630 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.916109 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.918813 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.919148 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.921369 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4rpf8"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.921782 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.922100 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.922918 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.923121 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.931175 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.980430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.980539 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.980666 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.980712 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.980794 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.980844 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-config-data\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.980878 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-599fx\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-kube-api-access-599fx\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.980930 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.981020 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.981129 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:38 crc kubenswrapper[4770]: I1209 14:43:38.981190 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082364 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082427 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082452 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-config-data\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082478 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-599fx\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-kube-api-access-599fx\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082504 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082532 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082557 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082580 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082621 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082655 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.082699 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.083319 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.083444 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.084364 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.084589 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.085214 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-config-data\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.087284 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.088695 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.089233 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.089381 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.100153 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.100194 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ff190b9c014c406df55c6aa1d5eb46bda5c01a4221266cf4574e441f95c466d2/globalmount\"" pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.100557 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-599fx\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-kube-api-access-599fx\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.139807 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") pod \"rabbitmq-server-0\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.244432 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.244663 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.245693 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.250262 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5mt88"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.250388 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.250460 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.250689 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.250921 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.251352 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.251525 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.256842 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.291196 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31bb1b14-4de1-4586-8bde-d29afdaad6fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.291483 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.291612 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8tqs\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-kube-api-access-m8tqs\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.291710 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.291868 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.291943 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.292069 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.292166 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.292243 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.292369 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31bb1b14-4de1-4586-8bde-d29afdaad6fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.292499 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.394605 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.394808 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.394837 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.394896 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31bb1b14-4de1-4586-8bde-d29afdaad6fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.394942 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.394987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31bb1b14-4de1-4586-8bde-d29afdaad6fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.395010 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.395037 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8tqs\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-kube-api-access-m8tqs\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.395071 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.395093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.395111 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.395856 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.396898 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.397175 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.397231 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.399955 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.401144 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.402555 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.402579 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/59eb480a7f90adad28486c2e8d3eef49ac145acc82eca555e619178fb850ca35/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.403319 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31bb1b14-4de1-4586-8bde-d29afdaad6fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.413420 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.413950 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31bb1b14-4de1-4586-8bde-d29afdaad6fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.414784 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8tqs\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-kube-api-access-m8tqs\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.442507 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") pod \"rabbitmq-cell1-server-0\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.640602 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.704429 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8" event={"ID":"8f64084c-eec9-4638-9bc2-ead5dd90d8b6","Type":"ContainerStarted","Data":"324080b81e4ee2ddfacba03eb1866ae8bc9c631effc04768cc94c2e1856369d7"}
Dec 09 14:43:39 crc kubenswrapper[4770]: I1209 14:43:39.791291 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.096816 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 09 14:43:40 crc kubenswrapper[4770]: W1209 14:43:40.106241 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31bb1b14_4de1_4586_8bde_d29afdaad6fd.slice/crio-d9d578e145b22c4b38c8c7f35c26471eab8f74da2795077a24e36fa2e3ec176a WatchSource:0}: Error finding container d9d578e145b22c4b38c8c7f35c26471eab8f74da2795077a24e36fa2e3ec176a: Status 404 returned error can't find the container with id d9d578e145b22c4b38c8c7f35c26471eab8f74da2795077a24e36fa2e3ec176a
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.481784 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.484504 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.487460 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.487545 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.488061 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.488752 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fdpf8"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.495705 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.507377 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.615337 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d87ac62-20d5-476f-97d9-34d8698fc78f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.615376 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d87ac62-20d5-476f-97d9-34d8698fc78f-kolla-config\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.615405 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d87ac62-20d5-476f-97d9-34d8698fc78f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.615444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjw97\" (UniqueName: \"kubernetes.io/projected/1d87ac62-20d5-476f-97d9-34d8698fc78f-kube-api-access-gjw97\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.615642 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d87ac62-20d5-476f-97d9-34d8698fc78f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.615916 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d87ac62-20d5-476f-97d9-34d8698fc78f-config-data-default\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.616119 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-78932ee8-8f7c-43a6-8204-c5b4096e7d5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-78932ee8-8f7c-43a6-8204-c5b4096e7d5c\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.616168 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d87ac62-20d5-476f-97d9-34d8698fc78f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.712276 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469","Type":"ContainerStarted","Data":"28d462eca7c6f34b8e24f5fd838250f145546551cbd55d7d968a290b473a6374"}
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.713532 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31bb1b14-4de1-4586-8bde-d29afdaad6fd","Type":"ContainerStarted","Data":"d9d578e145b22c4b38c8c7f35c26471eab8f74da2795077a24e36fa2e3ec176a"}
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.718067 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d87ac62-20d5-476f-97d9-34d8698fc78f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.718192 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d87ac62-20d5-476f-97d9-34d8698fc78f-config-data-default\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.718256 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-78932ee8-8f7c-43a6-8204-c5b4096e7d5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-78932ee8-8f7c-43a6-8204-c5b4096e7d5c\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.718287 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d87ac62-20d5-476f-97d9-34d8698fc78f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.718317 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d87ac62-20d5-476f-97d9-34d8698fc78f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.718337 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d87ac62-20d5-476f-97d9-34d8698fc78f-kolla-config\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.718374 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d87ac62-20d5-476f-97d9-34d8698fc78f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.718426 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjw97\" (UniqueName: \"kubernetes.io/projected/1d87ac62-20d5-476f-97d9-34d8698fc78f-kube-api-access-gjw97\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.719161 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d87ac62-20d5-476f-97d9-34d8698fc78f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.719202 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d87ac62-20d5-476f-97d9-34d8698fc78f-config-data-default\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.719862 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d87ac62-20d5-476f-97d9-34d8698fc78f-kolla-config\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.721105 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d87ac62-20d5-476f-97d9-34d8698fc78f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.723835 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d87ac62-20d5-476f-97d9-34d8698fc78f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.726943 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d87ac62-20d5-476f-97d9-34d8698fc78f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.730047 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.730092 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-78932ee8-8f7c-43a6-8204-c5b4096e7d5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-78932ee8-8f7c-43a6-8204-c5b4096e7d5c\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3a35cf5780a1cad5b8c38670e85619d8d0dcd7c0433736251367ca4fc1c4e2be/globalmount\"" pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.735258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjw97\" (UniqueName: \"kubernetes.io/projected/1d87ac62-20d5-476f-97d9-34d8698fc78f-kube-api-access-gjw97\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.764188 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-78932ee8-8f7c-43a6-8204-c5b4096e7d5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-78932ee8-8f7c-43a6-8204-c5b4096e7d5c\") pod \"openstack-galera-0\" (UID: \"1d87ac62-20d5-476f-97d9-34d8698fc78f\") " pod="openstack/openstack-galera-0"
Dec 09 14:43:40 crc kubenswrapper[4770]: I1209 14:43:40.808737 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Dec 09 14:43:41 crc kubenswrapper[4770]: I1209 14:43:41.460921 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.021913 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.032407 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.034583 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-qhrlv"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.035207 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.036591 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.036592 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.037367 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.086520 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.090641 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.093759 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.094074 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.094266 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9wjl6"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.094883 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.158643 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.158718 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.158778 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.158850 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7njtp\" (UniqueName: \"kubernetes.io/projected/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-kube-api-access-7njtp\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.158879 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.158913 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.158939 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.158974 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4af02138-8095-41cb-9a00-66e91a3f5f03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4af02138-8095-41cb-9a00-66e91a3f5f03\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.259901 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc21963-c17e-4378-938e-200a8497203e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260290 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fc21963-c17e-4378-938e-200a8497203e-kolla-config\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260341 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260438 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260485 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvj9t\" (UniqueName: \"kubernetes.io/projected/6fc21963-c17e-4378-938e-200a8497203e-kube-api-access-hvj9t\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260555 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260700 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7njtp\" (UniqueName: \"kubernetes.io/projected/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-kube-api-access-7njtp\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260757 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fc21963-c17e-4378-938e-200a8497203e-config-data\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260780 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260820 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260844 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.260965 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4af02138-8095-41cb-9a00-66e91a3f5f03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4af02138-8095-41cb-9a00-66e91a3f5f03\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.261031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc21963-c17e-4378-938e-200a8497203e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.261300 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.262211 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.262252 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.266132 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.266549 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.282949 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0"
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.284873 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.284933 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4af02138-8095-41cb-9a00-66e91a3f5f03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4af02138-8095-41cb-9a00-66e91a3f5f03\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1d0da9fa0bf64584ee8da971adfa1f2b6139c5d41b9431d330dc9bfab4818f3c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.285184 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njtp\" (UniqueName: \"kubernetes.io/projected/b8f05019-cb7b-46bb-bb57-2f8c6a9bba53-kube-api-access-7njtp\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.363180 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fc21963-c17e-4378-938e-200a8497203e-config-data\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.363277 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc21963-c17e-4378-938e-200a8497203e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.363303 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc21963-c17e-4378-938e-200a8497203e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.363329 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fc21963-c17e-4378-938e-200a8497203e-kolla-config\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.363377 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvj9t\" (UniqueName: \"kubernetes.io/projected/6fc21963-c17e-4378-938e-200a8497203e-kube-api-access-hvj9t\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.364333 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fc21963-c17e-4378-938e-200a8497203e-config-data\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.364337 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fc21963-c17e-4378-938e-200a8497203e-kolla-config\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.366646 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc21963-c17e-4378-938e-200a8497203e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.368027 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc21963-c17e-4378-938e-200a8497203e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.369417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4af02138-8095-41cb-9a00-66e91a3f5f03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4af02138-8095-41cb-9a00-66e91a3f5f03\") pod \"openstack-cell1-galera-0\" (UID: \"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53\") " pod="openstack/openstack-cell1-galera-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.382404 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvj9t\" (UniqueName: \"kubernetes.io/projected/6fc21963-c17e-4378-938e-200a8497203e-kube-api-access-hvj9t\") pod \"memcached-0\" (UID: \"6fc21963-c17e-4378-938e-200a8497203e\") " pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.422286 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 14:43:42 crc kubenswrapper[4770]: I1209 14:43:42.662671 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.044666 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.046653 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.050807 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hmktn" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.062492 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.191837 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp96k\" (UniqueName: \"kubernetes.io/projected/95b1d2b0-6b25-4853-aae2-9cdc30773854-kube-api-access-mp96k\") pod \"kube-state-metrics-0\" (UID: \"95b1d2b0-6b25-4853-aae2-9cdc30773854\") " pod="openstack/kube-state-metrics-0" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.293592 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp96k\" (UniqueName: \"kubernetes.io/projected/95b1d2b0-6b25-4853-aae2-9cdc30773854-kube-api-access-mp96k\") pod \"kube-state-metrics-0\" (UID: \"95b1d2b0-6b25-4853-aae2-9cdc30773854\") " pod="openstack/kube-state-metrics-0" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.341960 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp96k\" (UniqueName: \"kubernetes.io/projected/95b1d2b0-6b25-4853-aae2-9cdc30773854-kube-api-access-mp96k\") pod \"kube-state-metrics-0\" (UID: \"95b1d2b0-6b25-4853-aae2-9cdc30773854\") " pod="openstack/kube-state-metrics-0" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.374066 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.834116 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.836001 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.840384 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.840611 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.845297 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-9gfz6" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.846151 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.854482 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Dec 09 14:43:44 crc kubenswrapper[4770]: I1209 14:43:44.886689 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.008175 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52rpp\" (UniqueName: \"kubernetes.io/projected/08118160-2e03-4319-97ed-051b92b14c1e-kube-api-access-52rpp\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.008478 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/08118160-2e03-4319-97ed-051b92b14c1e-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.008750 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/08118160-2e03-4319-97ed-051b92b14c1e-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.008915 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/08118160-2e03-4319-97ed-051b92b14c1e-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.009034 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/08118160-2e03-4319-97ed-051b92b14c1e-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.009138 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/08118160-2e03-4319-97ed-051b92b14c1e-config-volume\") pod 
\"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.009203 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/08118160-2e03-4319-97ed-051b92b14c1e-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.110945 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/08118160-2e03-4319-97ed-051b92b14c1e-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.111010 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/08118160-2e03-4319-97ed-051b92b14c1e-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.111034 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/08118160-2e03-4319-97ed-051b92b14c1e-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.111069 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52rpp\" (UniqueName: \"kubernetes.io/projected/08118160-2e03-4319-97ed-051b92b14c1e-kube-api-access-52rpp\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.111120 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/08118160-2e03-4319-97ed-051b92b14c1e-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.111167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/08118160-2e03-4319-97ed-051b92b14c1e-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.111213 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/08118160-2e03-4319-97ed-051b92b14c1e-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.111958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/08118160-2e03-4319-97ed-051b92b14c1e-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.115218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/08118160-2e03-4319-97ed-051b92b14c1e-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.115575 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/08118160-2e03-4319-97ed-051b92b14c1e-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.116795 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/08118160-2e03-4319-97ed-051b92b14c1e-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.127938 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/08118160-2e03-4319-97ed-051b92b14c1e-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.130381 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/08118160-2e03-4319-97ed-051b92b14c1e-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.133132 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52rpp\" (UniqueName: \"kubernetes.io/projected/08118160-2e03-4319-97ed-051b92b14c1e-kube-api-access-52rpp\") pod \"alertmanager-metric-storage-0\" (UID: \"08118160-2e03-4319-97ed-051b92b14c1e\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.175016 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.462619 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.465265 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.467384 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.467692 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.469569 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.469775 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.469994 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.470262 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zfw4p" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.481663 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.623231 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.624084 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.624198 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.624897 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.624932 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.624971 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b9m7\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-kube-api-access-4b9m7\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.624998 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.625017 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.726981 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.727080 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.727117 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b9m7\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-kube-api-access-4b9m7\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.727146 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.727171 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.727235 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.727277 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.727348 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.728465 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.731512 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.732122 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.732252 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.733068 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.733109 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ffb0c130d32fd7a4a3dbad94b288743aa1490975c711021e1be0373426019df3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.733588 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.734149 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.746213 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b9m7\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-kube-api-access-4b9m7\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.781585 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") pod \"prometheus-metric-storage-0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.787115 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 14:43:45 crc kubenswrapper[4770]: I1209 14:43:45.788182 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1d87ac62-20d5-476f-97d9-34d8698fc78f","Type":"ContainerStarted","Data":"804cfede593896b34dd86af677246946becc8d58ba45c668608e80ebf4b84f11"} Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.428609 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.432598 4770 util.go:30] "No sandbox for pod can be found. 
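The csi_attacher.go:380 message above records why MountDevice completes almost instantly here: kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the kubelet skips the NodeStageVolume gRPC call and reports MountDevice succeeded with only the computed globalmount path; the real mount work happens in the subsequent per-pod SetUp (NodePublishVolume). The sketch below models that decision; the function and type names are illustrative, not the kubelet's actual API.

```python
# Sketch of the staging decision visible in the csi_attacher lines above:
# when a CSI driver does not advertise STAGE_UNSTAGE_VOLUME, MountDevice is
# a no-op that immediately succeeds. Illustrative names only.
from dataclasses import dataclass


def node_stage_volume(driver: "CsiDriver", handle: str, path: str) -> None:
    # Placeholder for the CSI NodeStageVolume gRPC call.
    print(f"NodeStageVolume({handle}) -> {path}")


@dataclass
class CsiDriver:
    name: str
    node_capabilities: set  # e.g. {"STAGE_UNSTAGE_VOLUME"}


def mount_device(driver: CsiDriver, handle: str, global_mount_path: str) -> None:
    if "STAGE_UNSTAGE_VOLUME" not in driver.node_capabilities:
        # matches: "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..."
        print(f"skip NodeStageVolume for {handle}; "
              f"MountDevice trivially succeeds at {global_mount_path}")
        return
    node_stage_volume(driver, handle, global_mount_path)


mount_device(
    CsiDriver("kubevirt.io.hostpath-provisioner", set()),
    "pvc-ff403788-e2d7-4f85-8261-191f5e36e620",
    "/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/"
    "ffb0c130d32fd7a4a3dbad94b288743aa1490975c711021e1be0373426019df3/globalmount",
)
```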
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.434890 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.435229 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.439391 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-x6plq" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.439638 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.439902 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.450964 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.539755 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3917b78f-5515-4149-82d3-96a981c77ac5-config\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.539882 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3917b78f-5515-4149-82d3-96a981c77ac5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.539966 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3917b78f-5515-4149-82d3-96a981c77ac5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.540040 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2l8z\" (UniqueName: \"kubernetes.io/projected/3917b78f-5515-4149-82d3-96a981c77ac5-kube-api-access-r2l8z\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.540119 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3917b78f-5515-4149-82d3-96a981c77ac5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.540186 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3917b78f-5515-4149-82d3-96a981c77ac5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.540303 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3917b78f-5515-4149-82d3-96a981c77ac5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.540348 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e3fbffe-afc3-4bce-b885-b6d2518901dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e3fbffe-afc3-4bce-b885-b6d2518901dc\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.641748 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3917b78f-5515-4149-82d3-96a981c77ac5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.642831 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2l8z\" (UniqueName: \"kubernetes.io/projected/3917b78f-5515-4149-82d3-96a981c77ac5-kube-api-access-r2l8z\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.642977 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3917b78f-5515-4149-82d3-96a981c77ac5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.643039 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3917b78f-5515-4149-82d3-96a981c77ac5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.643131 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3917b78f-5515-4149-82d3-96a981c77ac5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.643168 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e3fbffe-afc3-4bce-b885-b6d2518901dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e3fbffe-afc3-4bce-b885-b6d2518901dc\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.643203 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3917b78f-5515-4149-82d3-96a981c77ac5-config\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.643239 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3917b78f-5515-4149-82d3-96a981c77ac5-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.644452 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3917b78f-5515-4149-82d3-96a981c77ac5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.650908 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3917b78f-5515-4149-82d3-96a981c77ac5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.653057 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.653145 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e3fbffe-afc3-4bce-b885-b6d2518901dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e3fbffe-afc3-4bce-b885-b6d2518901dc\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b50845290cb54e10760f9629ad85b13713a3dde432ca72f807c2897bfcb3dc16/globalmount\"" pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.662459 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.663103 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.663570 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.664533 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.665841 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3917b78f-5515-4149-82d3-96a981c77ac5-config\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.668246 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3917b78f-5515-4149-82d3-96a981c77ac5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.671593 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3917b78f-5515-4149-82d3-96a981c77ac5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.675821 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3917b78f-5515-4149-82d3-96a981c77ac5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.694269 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-slxb5"] Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.696299 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.698416 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2l8z\" (UniqueName: \"kubernetes.io/projected/3917b78f-5515-4149-82d3-96a981c77ac5-kube-api-access-r2l8z\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.699885 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-wrntm" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.700436 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.700451 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.733865 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5ng4w"] Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.737575 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.743657 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e3fbffe-afc3-4bce-b885-b6d2518901dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e3fbffe-afc3-4bce-b885-b6d2518901dc\") pod \"ovsdbserver-nb-0\" (UID: \"3917b78f-5515-4149-82d3-96a981c77ac5\") " pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744597 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eebc5e5-e737-4171-abed-1e04fa89b0b4-ovn-controller-tls-certs\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744678 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-etc-ovs\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744702 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f087e8e2-8532-4abc-925b-574ebb448bde-scripts\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744733 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8eebc5e5-e737-4171-abed-1e04fa89b0b4-var-run-ovn\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744757 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eebc5e5-e737-4171-abed-1e04fa89b0b4-combined-ca-bundle\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744844 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8eebc5e5-e737-4171-abed-1e04fa89b0b4-var-run\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744892 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-var-run\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744912 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-var-lib\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744942 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-var-log\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744960 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4q9d\" (UniqueName: \"kubernetes.io/projected/8eebc5e5-e737-4171-abed-1e04fa89b0b4-kube-api-access-p4q9d\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.744977 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8eebc5e5-e737-4171-abed-1e04fa89b0b4-var-log-ovn\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.745029 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eebc5e5-e737-4171-abed-1e04fa89b0b4-scripts\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.745057 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jqbh\" (UniqueName: 
\"kubernetes.io/projected/f087e8e2-8532-4abc-925b-574ebb448bde-kube-api-access-8jqbh\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.758350 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-x6plq" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.765247 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-slxb5"] Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.766148 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.790788 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5ng4w"] Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.846960 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-etc-ovs\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847009 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8eebc5e5-e737-4171-abed-1e04fa89b0b4-var-run-ovn\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847028 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f087e8e2-8532-4abc-925b-574ebb448bde-scripts\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847044 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eebc5e5-e737-4171-abed-1e04fa89b0b4-combined-ca-bundle\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847066 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8eebc5e5-e737-4171-abed-1e04fa89b0b4-var-run\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847100 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-var-run\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847115 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-var-lib\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847142 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-var-log\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847162 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4q9d\" (UniqueName: \"kubernetes.io/projected/8eebc5e5-e737-4171-abed-1e04fa89b0b4-kube-api-access-p4q9d\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847177 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8eebc5e5-e737-4171-abed-1e04fa89b0b4-var-log-ovn\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847232 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eebc5e5-e737-4171-abed-1e04fa89b0b4-scripts\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847258 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jqbh\" (UniqueName: \"kubernetes.io/projected/f087e8e2-8532-4abc-925b-574ebb448bde-kube-api-access-8jqbh\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.847289 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eebc5e5-e737-4171-abed-1e04fa89b0b4-ovn-controller-tls-certs\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.848415 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-var-run\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.848699 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8eebc5e5-e737-4171-abed-1e04fa89b0b4-var-run\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.848719 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-etc-ovs\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.848896 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-var-lib\") pod \"ovn-controller-ovs-5ng4w\" (UID: 
\"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.849041 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8eebc5e5-e737-4171-abed-1e04fa89b0b4-var-run-ovn\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.849303 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8eebc5e5-e737-4171-abed-1e04fa89b0b4-var-log-ovn\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.849391 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f087e8e2-8532-4abc-925b-574ebb448bde-var-log\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.851463 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eebc5e5-e737-4171-abed-1e04fa89b0b4-scripts\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.852060 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f087e8e2-8532-4abc-925b-574ebb448bde-scripts\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.864079 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eebc5e5-e737-4171-abed-1e04fa89b0b4-ovn-controller-tls-certs\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.865422 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eebc5e5-e737-4171-abed-1e04fa89b0b4-combined-ca-bundle\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.865650 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4q9d\" (UniqueName: \"kubernetes.io/projected/8eebc5e5-e737-4171-abed-1e04fa89b0b4-kube-api-access-p4q9d\") pod \"ovn-controller-slxb5\" (UID: \"8eebc5e5-e737-4171-abed-1e04fa89b0b4\") " pod="openstack/ovn-controller-slxb5" Dec 09 14:43:48 crc kubenswrapper[4770]: I1209 14:43:48.868699 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jqbh\" (UniqueName: \"kubernetes.io/projected/f087e8e2-8532-4abc-925b-574ebb448bde-kube-api-access-8jqbh\") pod \"ovn-controller-ovs-5ng4w\" (UID: \"f087e8e2-8532-4abc-925b-574ebb448bde\") " pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:49 crc kubenswrapper[4770]: I1209 14:43:49.066376 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-slxb5" Dec 09 14:43:49 crc kubenswrapper[4770]: I1209 14:43:49.077879 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:43:52 crc kubenswrapper[4770]: I1209 14:43:52.837364 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 14:43:52 crc kubenswrapper[4770]: I1209 14:43:52.840043 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:52 crc kubenswrapper[4770]: I1209 14:43:52.842635 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 09 14:43:52 crc kubenswrapper[4770]: I1209 14:43:52.842828 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pmw8x" Dec 09 14:43:52 crc kubenswrapper[4770]: I1209 14:43:52.842987 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 09 14:43:52 crc kubenswrapper[4770]: I1209 14:43:52.843109 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 09 14:43:52 crc kubenswrapper[4770]: I1209 14:43:52.847770 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.046381 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41190237-fd6b-45b3-b68e-ad67b77ea11d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.046494 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/41190237-fd6b-45b3-b68e-ad67b77ea11d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.046557 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41190237-fd6b-45b3-b68e-ad67b77ea11d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.046902 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41190237-fd6b-45b3-b68e-ad67b77ea11d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.047057 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkhj\" (UniqueName: \"kubernetes.io/projected/41190237-fd6b-45b3-b68e-ad67b77ea11d-kube-api-access-rqkhj\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.047093 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/41190237-fd6b-45b3-b68e-ad67b77ea11d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.047124 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04b85324-50a4-4601-accb-9ea9ca6b7879\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04b85324-50a4-4601-accb-9ea9ca6b7879\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.047174 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41190237-fd6b-45b3-b68e-ad67b77ea11d-config\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.148604 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41190237-fd6b-45b3-b68e-ad67b77ea11d-config\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.149223 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41190237-fd6b-45b3-b68e-ad67b77ea11d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.149264 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/41190237-fd6b-45b3-b68e-ad67b77ea11d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.149287 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41190237-fd6b-45b3-b68e-ad67b77ea11d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.149305 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41190237-fd6b-45b3-b68e-ad67b77ea11d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.149376 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqkhj\" (UniqueName: \"kubernetes.io/projected/41190237-fd6b-45b3-b68e-ad67b77ea11d-kube-api-access-rqkhj\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.149394 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/41190237-fd6b-45b3-b68e-ad67b77ea11d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " 
pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.149472 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04b85324-50a4-4601-accb-9ea9ca6b7879\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04b85324-50a4-4601-accb-9ea9ca6b7879\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.150458 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41190237-fd6b-45b3-b68e-ad67b77ea11d-config\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.151636 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/41190237-fd6b-45b3-b68e-ad67b77ea11d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.152907 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41190237-fd6b-45b3-b68e-ad67b77ea11d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.156339 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.156380 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04b85324-50a4-4601-accb-9ea9ca6b7879\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04b85324-50a4-4601-accb-9ea9ca6b7879\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4931644d85c7920f6147ef4934329f298784f3b49bd8e87a114e25c09b403a76/globalmount\"" pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.166741 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41190237-fd6b-45b3-b68e-ad67b77ea11d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.166871 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/41190237-fd6b-45b3-b68e-ad67b77ea11d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.167151 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41190237-fd6b-45b3-b68e-ad67b77ea11d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.167898 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqkhj\" (UniqueName: 
\"kubernetes.io/projected/41190237-fd6b-45b3-b68e-ad67b77ea11d-kube-api-access-rqkhj\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.190509 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04b85324-50a4-4601-accb-9ea9ca6b7879\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04b85324-50a4-4601-accb-9ea9ca6b7879\") pod \"ovsdbserver-sb-0\" (UID: \"41190237-fd6b-45b3-b68e-ad67b77ea11d\") " pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:53 crc kubenswrapper[4770]: I1209 14:43:53.473019 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.120319 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm"] Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.121960 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.126049 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.126407 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.126049 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.126655 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.126711 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-tjcdt" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.152259 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm"] Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.202915 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/dc16ff55-b814-4912-842a-2744c0450b51-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.202978 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/dc16ff55-b814-4912-842a-2744c0450b51-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.203075 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc16ff55-b814-4912-842a-2744c0450b51-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.203129 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4g8j\" (UniqueName: \"kubernetes.io/projected/dc16ff55-b814-4912-842a-2744c0450b51-kube-api-access-m4g8j\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.203227 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc16ff55-b814-4912-842a-2744c0450b51-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.307429 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/dc16ff55-b814-4912-842a-2744c0450b51-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.307522 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/dc16ff55-b814-4912-842a-2744c0450b51-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.307624 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc16ff55-b814-4912-842a-2744c0450b51-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.307678 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4g8j\" (UniqueName: \"kubernetes.io/projected/dc16ff55-b814-4912-842a-2744c0450b51-kube-api-access-m4g8j\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.310761 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc16ff55-b814-4912-842a-2744c0450b51-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.311642 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dc16ff55-b814-4912-842a-2744c0450b51-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.311995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc16ff55-b814-4912-842a-2744c0450b51-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.323873 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/dc16ff55-b814-4912-842a-2744c0450b51-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.325354 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/dc16ff55-b814-4912-842a-2744c0450b51-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.331721 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"] Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.332931 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.343014 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.343531 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.343843 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.356669 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4g8j\" (UniqueName: \"kubernetes.io/projected/dc16ff55-b814-4912-842a-2744c0450b51-kube-api-access-m4g8j\") pod \"cloudkitty-lokistack-distributor-664b687b54-75lfm\" (UID: \"dc16ff55-b814-4912-842a-2744c0450b51\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.396037 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"] Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.411989 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.412065 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.412115 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.412161 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.412207 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-878f9\" (UniqueName: \"kubernetes.io/projected/3d8fd93c-ff55-4b03-9024-52af60e3e632-kube-api-access-878f9\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" 
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.412236 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8fd93c-ff55-4b03-9024-52af60e3e632-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.449167 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.512261 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"]
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.514024 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.514440 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.514549 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.514127 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.514745 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.514851 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-878f9\" (UniqueName: \"kubernetes.io/projected/3d8fd93c-ff55-4b03-9024-52af60e3e632-kube-api-access-878f9\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.515000 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8fd93c-ff55-4b03-9024-52af60e3e632-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.516072 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8fd93c-ff55-4b03-9024-52af60e3e632-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.516670 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.521245 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.522051 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.522847 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.532594 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.537108 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/3d8fd93c-ff55-4b03-9024-52af60e3e632-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.537674 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"]
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.561998 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-878f9\" (UniqueName: \"kubernetes.io/projected/3d8fd93c-ff55-4b03-9024-52af60e3e632-kube-api-access-878f9\") pod \"cloudkitty-lokistack-querier-5467947bf7-6tmxj\" (UID: \"3d8fd93c-ff55-4b03-9024-52af60e3e632\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.616475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfhw\" (UniqueName: \"kubernetes.io/projected/aeb389cf-bc24-4200-8561-a3c804f1d8c0-kube-api-access-6bfhw\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.616545 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb389cf-bc24-4200-8561-a3c804f1d8c0-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.616583 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/aeb389cf-bc24-4200-8561-a3c804f1d8c0-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.616617 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/aeb389cf-bc24-4200-8561-a3c804f1d8c0-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.616670 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeb389cf-bc24-4200-8561-a3c804f1d8c0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.681961 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"]
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.684158 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.686949 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.687409 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.687532 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-5hxrk"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.687779 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.688786 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.689178 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.690047 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.725500 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.727872 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bfhw\" (UniqueName: \"kubernetes.io/projected/aeb389cf-bc24-4200-8561-a3c804f1d8c0-kube-api-access-6bfhw\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.729510 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.729684 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb389cf-bc24-4200-8561-a3c804f1d8c0-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.729836 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/aeb389cf-bc24-4200-8561-a3c804f1d8c0-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.729990 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/aeb389cf-bc24-4200-8561-a3c804f1d8c0-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.730125 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8sg\" (UniqueName: \"kubernetes.io/projected/754ed0cc-ec25-45d4-b0d0-907d92e939fd-kube-api-access-lm8sg\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.730262 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.736036 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeb389cf-bc24-4200-8561-a3c804f1d8c0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.736260 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.736366 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.736480 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.736647 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.736768 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.732649 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb389cf-bc24-4200-8561-a3c804f1d8c0-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.737957 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeb389cf-bc24-4200-8561-a3c804f1d8c0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.738936 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/aeb389cf-bc24-4200-8561-a3c804f1d8c0-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.739137 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/aeb389cf-bc24-4200-8561-a3c804f1d8c0-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.761889 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"]
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.785308 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.789687 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"]
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.791805 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.800084 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bfhw\" (UniqueName: \"kubernetes.io/projected/aeb389cf-bc24-4200-8561-a3c804f1d8c0-kube-api-access-6bfhw\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp\" (UID: \"aeb389cf-bc24-4200-8561-a3c804f1d8c0\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.805809 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"]
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839643 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839714 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839770 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839798 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839832 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839856 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839880 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fw2j\" (UniqueName: \"kubernetes.io/projected/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-kube-api-access-5fw2j\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839928 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839951 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839973 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.839998 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.840035 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm8sg\" (UniqueName: \"kubernetes.io/projected/754ed0cc-ec25-45d4-b0d0-907d92e939fd-kube-api-access-lm8sg\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.840053 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.840141 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.840167 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.840207 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.840228 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.840249 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: E1209 14:43:56.840411 4770 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found
Dec 09 14:43:56 crc kubenswrapper[4770]: E1209 14:43:56.840471 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-tls-secret podName:754ed0cc-ec25-45d4-b0d0-907d92e939fd nodeName:}" failed. No retries permitted until 2025-12-09 14:43:57.340452921 +0000 UTC m=+1269.236655057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-tls-secret") pod "cloudkitty-lokistack-gateway-bc75944f-86wfb" (UID: "754ed0cc-ec25-45d4-b0d0-907d92e939fd") : secret "cloudkitty-lokistack-gateway-http" not found
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.842671 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.843898 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.847770 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.849983 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.850039 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.850331 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.850343 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/754ed0cc-ec25-45d4-b0d0-907d92e939fd-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.860552 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm8sg\" (UniqueName: \"kubernetes.io/projected/754ed0cc-ec25-45d4-b0d0-907d92e939fd-kube-api-access-lm8sg\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.944253 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.944320 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.944388 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.944409 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.944438 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.944483 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.944500 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.944524 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.944559 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fw2j\" (UniqueName: \"kubernetes.io/projected/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-kube-api-access-5fw2j\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.946854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.948840 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.951370 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.951745 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.951810 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.952019 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.952599 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.957120 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.967190 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fw2j\" (UniqueName: \"kubernetes.io/projected/bf0f3c4c-bbc9-484d-8153-e12ad4118c9a-kube-api-access-5fw2j\") pod \"cloudkitty-lokistack-gateway-bc75944f-rw588\" (UID: \"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:56 crc kubenswrapper[4770]: I1209 14:43:56.975352 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.126572 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.315361 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"]
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.317209 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.324913 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.325295 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.327285 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"]
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.350932 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.404049 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.405646 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.408331 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.408591 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.412472 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.452633 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67dab40a-3d7c-4737-bca9-28dc6280071c-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.452939 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.453047 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vhvz\" (UniqueName: \"kubernetes.io/projected/67dab40a-3d7c-4737-bca9-28dc6280071c-kube-api-access-4vhvz\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.453153 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.453293 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.453402 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.453507 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.453595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.528664 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.530043 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.539626 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.539837 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.551592 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555285 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555369 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555422 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vhvz\" (UniqueName: \"kubernetes.io/projected/67dab40a-3d7c-4737-bca9-28dc6280071c-kube-api-access-4vhvz\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555453 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555495 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhk7n\" (UniqueName: \"kubernetes.io/projected/1ac324ef-d65f-421c-b382-9c321ae7d447-kube-api-access-zhk7n\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555551 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555581 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555606 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555631 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555662 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555716 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555769 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555819 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67dab40a-3d7c-4737-bca9-28dc6280071c-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555841 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac324ef-d65f-421c-b382-9c321ae7d447-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID:
\"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.555871 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.556122 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.569141 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.657585 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.657627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhk7n\" (UniqueName: \"kubernetes.io/projected/1ac324ef-d65f-421c-b382-9c321ae7d447-kube-api-access-zhk7n\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.657652 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f02fdc-5866-4325-8d48-1333cd9a33d9-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.657672 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.657783 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.657821 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.657846 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.657870 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.657887 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wks8\" (UniqueName: \"kubernetes.io/projected/24f02fdc-5866-4325-8d48-1333cd9a33d9-kube-api-access-8wks8\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.658094 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.658144 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac324ef-d65f-421c-b382-9c321ae7d447-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.658169 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.658198 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.658248 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: 
\"kubernetes.io/secret/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.658361 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.680349 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.681109 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac324ef-d65f-421c-b382-9c321ae7d447-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.681630 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/754ed0cc-ec25-45d4-b0d0-907d92e939fd-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-86wfb\" (UID: \"754ed0cc-ec25-45d4-b0d0-907d92e939fd\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.683832 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.683925 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.684557 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.685562 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhk7n\" (UniqueName: \"kubernetes.io/projected/1ac324ef-d65f-421c-b382-9c321ae7d447-kube-api-access-zhk7n\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.685581 
4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67dab40a-3d7c-4737-bca9-28dc6280071c-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.692643 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.702267 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.703587 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.768612 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.768715 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.768813 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wks8\" (UniqueName: \"kubernetes.io/projected/24f02fdc-5866-4325-8d48-1333cd9a33d9-kube-api-access-8wks8\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.768978 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.769128 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 
14:43:57.769190 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f02fdc-5866-4325-8d48-1333cd9a33d9-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.769226 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.769507 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.905125 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.958712 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.958839 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.959205 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/1ac324ef-d65f-421c-b382-9c321ae7d447-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1ac324ef-d65f-421c-b382-9c321ae7d447\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.959346 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/67dab40a-3d7c-4737-bca9-28dc6280071c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.959818 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vhvz\" (UniqueName: \"kubernetes.io/projected/67dab40a-3d7c-4737-bca9-28dc6280071c-kube-api-access-4vhvz\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"67dab40a-3d7c-4737-bca9-28dc6280071c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.961514 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.964066 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f02fdc-5866-4325-8d48-1333cd9a33d9-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.964536 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.964546 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.965672 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/24f02fdc-5866-4325-8d48-1333cd9a33d9-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.966441 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wks8\" (UniqueName: \"kubernetes.io/projected/24f02fdc-5866-4325-8d48-1333cd9a33d9-kube-api-access-8wks8\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.973898 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"24f02fdc-5866-4325-8d48-1333cd9a33d9\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.981301 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.981847 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:43:57 crc kubenswrapper[4770]: I1209 14:43:57.983039 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:44:03 crc kubenswrapper[4770]: E1209 14:44:03.079316 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 09 14:44:03 crc kubenswrapper[4770]: E1209 14:44:03.080071 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8tqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(31bb1b14-4de1-4586-8bde-d29afdaad6fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:44:03 crc kubenswrapper[4770]: E1209 14:44:03.081276 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" Dec 09 14:44:03 crc kubenswrapper[4770]: E1209 14:44:03.942788 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 14:44:03 crc kubenswrapper[4770]: E1209 14:44:03.942992 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nxpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-rr5s8_openstack(8f64084c-eec9-4638-9bc2-ead5dd90d8b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:44:03 crc kubenswrapper[4770]: E1209 14:44:03.945045 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8" podUID="8f64084c-eec9-4638-9bc2-ead5dd90d8b6" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.000004 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8" podUID="8f64084c-eec9-4638-9bc2-ead5dd90d8b6" Dec 09 14:44:04 crc kubenswrapper[4770]: 
E1209 14:44:04.008372 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.015800 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.015988 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktvz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6pfhp_openstack(ba1ac1e3-8aa6-433e-8582-2057528dfc89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.017154 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp" podUID="ba1ac1e3-8aa6-433e-8582-2057528dfc89" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.321153 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.321298 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjvjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-lv2mf_openstack(61983ce8-3f93-40f4-9729-7cdff9b84ad8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.322460 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf" podUID="61983ce8-3f93-40f4-9729-7cdff9b84ad8" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.344886 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.345249 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 
's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-599fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0c3c3035-f6c2-4e3b-a244-f5cc01a7b469): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.346663 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" Dec 09 14:44:04 crc kubenswrapper[4770]: I1209 14:44:04.741239 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 14:44:04 crc kubenswrapper[4770]: E1209 14:44:04.973170 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" Dec 09 14:44:06 crc kubenswrapper[4770]: W1209 14:44:06.934010 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95b1d2b0_6b25_4853_aae2_9cdc30773854.slice/crio-3d3ab94fe68d32754aa6fe8bc0697ec4b5da9bf6b660dc9c9fb47671f63aca3a WatchSource:0}: Error finding container 3d3ab94fe68d32754aa6fe8bc0697ec4b5da9bf6b660dc9c9fb47671f63aca3a: Status 404 returned error can't find the container with id 3d3ab94fe68d32754aa6fe8bc0697ec4b5da9bf6b660dc9c9fb47671f63aca3a Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.349297 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.360267 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf" event={"ID":"61983ce8-3f93-40f4-9729-7cdff9b84ad8","Type":"ContainerDied","Data":"07d34ad74b158df0fcb71648993ff16b2bbdf3f0ba0c1af8c62821ea62a171b0"} Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.360336 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lv2mf" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.367642 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95b1d2b0-6b25-4853-aae2-9cdc30773854","Type":"ContainerStarted","Data":"3d3ab94fe68d32754aa6fe8bc0697ec4b5da9bf6b660dc9c9fb47671f63aca3a"} Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.443379 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjvjn\" (UniqueName: \"kubernetes.io/projected/61983ce8-3f93-40f4-9729-7cdff9b84ad8-kube-api-access-hjvjn\") pod \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\" (UID: \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\") " Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.443440 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61983ce8-3f93-40f4-9729-7cdff9b84ad8-config\") pod \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\" (UID: \"61983ce8-3f93-40f4-9729-7cdff9b84ad8\") " Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.445114 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61983ce8-3f93-40f4-9729-7cdff9b84ad8-config" (OuterVolumeSpecName: "config") pod "61983ce8-3f93-40f4-9729-7cdff9b84ad8" (UID: "61983ce8-3f93-40f4-9729-7cdff9b84ad8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.449366 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61983ce8-3f93-40f4-9729-7cdff9b84ad8-kube-api-access-hjvjn" (OuterVolumeSpecName: "kube-api-access-hjvjn") pod "61983ce8-3f93-40f4-9729-7cdff9b84ad8" (UID: "61983ce8-3f93-40f4-9729-7cdff9b84ad8"). InnerVolumeSpecName "kube-api-access-hjvjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.545599 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjvjn\" (UniqueName: \"kubernetes.io/projected/61983ce8-3f93-40f4-9729-7cdff9b84ad8-kube-api-access-hjvjn\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.545632 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61983ce8-3f93-40f4-9729-7cdff9b84ad8-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.651272 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.680809 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.752082 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktvz6\" (UniqueName: \"kubernetes.io/projected/ba1ac1e3-8aa6-433e-8582-2057528dfc89-kube-api-access-ktvz6\") pod \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.752149 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-dns-svc\") pod \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.752221 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-config\") pod \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\" (UID: \"ba1ac1e3-8aa6-433e-8582-2057528dfc89\") " Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.753681 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba1ac1e3-8aa6-433e-8582-2057528dfc89" (UID: "ba1ac1e3-8aa6-433e-8582-2057528dfc89"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.757287 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-config" (OuterVolumeSpecName: "config") pod "ba1ac1e3-8aa6-433e-8582-2057528dfc89" (UID: "ba1ac1e3-8aa6-433e-8582-2057528dfc89"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.764473 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1ac1e3-8aa6-433e-8582-2057528dfc89-kube-api-access-ktvz6" (OuterVolumeSpecName: "kube-api-access-ktvz6") pod "ba1ac1e3-8aa6-433e-8582-2057528dfc89" (UID: "ba1ac1e3-8aa6-433e-8582-2057528dfc89"). InnerVolumeSpecName "kube-api-access-ktvz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.784219 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lv2mf"] Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.796060 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lv2mf"] Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.959937 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktvz6\" (UniqueName: \"kubernetes.io/projected/ba1ac1e3-8aa6-433e-8582-2057528dfc89-kube-api-access-ktvz6\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.959962 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:07 crc kubenswrapper[4770]: I1209 14:44:07.959972 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1ac1e3-8aa6-433e-8582-2057528dfc89-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:08 crc kubenswrapper[4770]: I1209 14:44:08.381144 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp" event={"ID":"ba1ac1e3-8aa6-433e-8582-2057528dfc89","Type":"ContainerDied","Data":"8f8a8599ac9b24c4235ced1639d8926a5d232da18de1dd34af51d55e5d4cf846"} Dec 09 14:44:08 crc kubenswrapper[4770]: I1209 14:44:08.381365 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6pfhp" Dec 09 14:44:08 crc kubenswrapper[4770]: I1209 14:44:08.382989 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fc21963-c17e-4378-938e-200a8497203e","Type":"ContainerStarted","Data":"13cafe6f99c19b6b9a17a676ae0f0555fdef86f9ea5fa21876df670a1e61fb26"} Dec 09 14:44:08 crc kubenswrapper[4770]: I1209 14:44:08.393554 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" event={"ID":"85ea185b-86cb-4c27-855b-f27876b91a20","Type":"ContainerStarted","Data":"0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b"} Dec 09 14:44:08 crc kubenswrapper[4770]: I1209 14:44:08.453131 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6pfhp"] Dec 09 14:44:08 crc kubenswrapper[4770]: I1209 14:44:08.462390 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6pfhp"] Dec 09 14:44:08 crc kubenswrapper[4770]: I1209 14:44:08.835848 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61983ce8-3f93-40f4-9729-7cdff9b84ad8" path="/var/lib/kubelet/pods/61983ce8-3f93-40f4-9729-7cdff9b84ad8/volumes" Dec 09 14:44:08 crc kubenswrapper[4770]: I1209 14:44:08.836242 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1ac1e3-8aa6-433e-8582-2057528dfc89" path="/var/lib/kubelet/pods/ba1ac1e3-8aa6-433e-8582-2057528dfc89/volumes" Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.155050 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.189001 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-slxb5"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.197616 4770 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.204408 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.211959 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.218765 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 09 14:44:09 crc kubenswrapper[4770]: W1209 14:44:09.221559 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c6a1039_13ac_4d63_b0b0_3f54e7c3cee0.slice/crio-4c4c97ac27dfce5849cb07be02511bbf4b31f4c0c5c7c82d5d18b4e308004e77 WatchSource:0}: Error finding container 4c4c97ac27dfce5849cb07be02511bbf4b31f4c0c5c7c82d5d18b4e308004e77: Status 404 returned error can't find the container with id 4c4c97ac27dfce5849cb07be02511bbf4b31f4c0c5c7c82d5d18b4e308004e77 Dec 09 14:44:09 crc kubenswrapper[4770]: W1209 14:44:09.225210 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08118160_2e03_4319_97ed_051b92b14c1e.slice/crio-0d5e909675fae815ca53aca4dd5229c19941ce3bb0c97cceb5e6f7b70465356a WatchSource:0}: Error finding container 0d5e909675fae815ca53aca4dd5229c19941ce3bb0c97cceb5e6f7b70465356a: Status 404 returned error can't find the container with id 0d5e909675fae815ca53aca4dd5229c19941ce3bb0c97cceb5e6f7b70465356a Dec 09 14:44:09 crc kubenswrapper[4770]: W1209 14:44:09.225869 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eebc5e5_e737_4171_abed_1e04fa89b0b4.slice/crio-df83c6ceaddf85c36881a6d5e4b7d82fb04a6c04aa4cb5820d182caabe121264 WatchSource:0}: Error finding container df83c6ceaddf85c36881a6d5e4b7d82fb04a6c04aa4cb5820d182caabe121264: Status 404 returned error can't find the container with id df83c6ceaddf85c36881a6d5e4b7d82fb04a6c04aa4cb5820d182caabe121264 Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.225905 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-rw588"] Dec 09 14:44:09 crc kubenswrapper[4770]: W1209 14:44:09.227169 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67dab40a_3d7c_4737_bca9_28dc6280071c.slice/crio-6919a2f20d74edd5057110b7e7c0fda4d29a021597e4247bd6bb4356257ee419 WatchSource:0}: Error finding container 6919a2f20d74edd5057110b7e7c0fda4d29a021597e4247bd6bb4356257ee419: Status 404 returned error can't find the container with id 6919a2f20d74edd5057110b7e7c0fda4d29a021597e4247bd6bb4356257ee419 Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.233135 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm"] Dec 09 14:44:09 crc kubenswrapper[4770]: W1209 14:44:09.242286 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaeb389cf_bc24_4200_8561_a3c804f1d8c0.slice/crio-37ab62db99b30f7fd03633033bf0409106febc688191483c81d854771e7fdca5 WatchSource:0}: Error finding container 37ab62db99b30f7fd03633033bf0409106febc688191483c81d854771e7fdca5: Status 404 returned error can't 
find the container with id 37ab62db99b30f7fd03633033bf0409106febc688191483c81d854771e7fdca5 Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.406683 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" event={"ID":"dc16ff55-b814-4912-842a-2744c0450b51","Type":"ContainerStarted","Data":"8b25217c46d775a01c1091e79659a8eb3178b5c5dddb509a9551f96d4b5cf41b"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.547203 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerStarted","Data":"4c4c97ac27dfce5849cb07be02511bbf4b31f4c0c5c7c82d5d18b4e308004e77"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.554381 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1d87ac62-20d5-476f-97d9-34d8698fc78f","Type":"ContainerStarted","Data":"15e41d9e350eab62c920fb4fcf017208fc1572ddc46bd95710067dcfd113bd4e"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.569591 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"08118160-2e03-4319-97ed-051b92b14c1e","Type":"ContainerStarted","Data":"0d5e909675fae815ca53aca4dd5229c19941ce3bb0c97cceb5e6f7b70465356a"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.592463 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.604326 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-slxb5" event={"ID":"8eebc5e5-e737-4171-abed-1e04fa89b0b4","Type":"ContainerStarted","Data":"df83c6ceaddf85c36881a6d5e4b7d82fb04a6c04aa4cb5820d182caabe121264"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.610799 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"67dab40a-3d7c-4737-bca9-28dc6280071c","Type":"ContainerStarted","Data":"6919a2f20d74edd5057110b7e7c0fda4d29a021597e4247bd6bb4356257ee419"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.622147 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588" event={"ID":"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a","Type":"ContainerStarted","Data":"5eefa344671dcd85166f578dce1ca18783f70392878f62311e701a20a0e6372a"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.632679 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.649823 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.655458 4770 generic.go:334] "Generic (PLEG): container finished" podID="85ea185b-86cb-4c27-855b-f27876b91a20" containerID="0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b" exitCode=0 Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.655523 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" event={"ID":"85ea185b-86cb-4c27-855b-f27876b91a20","Type":"ContainerDied","Data":"0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.659592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
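The "SyncLoop (PLEG)" entries come from the pod lifecycle event generator, which periodically relists containers and diffs each pod's containers against the previous snapshot; the "Generic (PLEG): container finished" line additionally records the exit code observed in that diff. A simplified sketch of the diff, using illustrative types rather than the kubelet's actual pkg/kubelet/pleg implementation:

// Sketch of PLEG-style event generation: compare two relist snapshots
// (containerID -> state) and emit the lifecycle events seen in the log.
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct {
	PodID, ContainerID string
	Type               string // "ContainerStarted" or "ContainerDied"
}

// diff emits ContainerStarted for newly running containers and ContainerDied
// for containers that were running last relist but have since exited.
func diff(podID string, prev, cur map[string]state) []event {
	var evs []event
	for id, s := range cur {
		old, seen := prev[id]
		switch {
		case !seen && s == running:
			evs = append(evs, event{podID, id, "ContainerStarted"})
		case seen && old == running && s == exited:
			evs = append(evs, event{podID, id, "ContainerDied"})
		}
	}
	return evs
}

func main() {
	prev := map[string]state{"0f255dc841ab": running}
	cur := map[string]state{"0f255dc841ab": exited, "8b25217c46d7": running}
	for _, e := range diff("openstack/dnsmasq-dns-666b6646f7-8c8gb", prev, cur) {
		fmt.Printf("%s %s %s\n", e.Type, e.PodID, e.ContainerID)
	}
}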
pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" event={"ID":"3d8fd93c-ff55-4b03-9024-52af60e3e632","Type":"ContainerStarted","Data":"c81f60f94586b2e550b4d57c9381d3ef9b43ea9492fc746e253a8a52d4c7417f"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.661059 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp" event={"ID":"aeb389cf-bc24-4200-8561-a3c804f1d8c0","Type":"ContainerStarted","Data":"37ab62db99b30f7fd03633033bf0409106febc688191483c81d854771e7fdca5"} Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.673239 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.705419 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 14:44:09 crc kubenswrapper[4770]: I1209 14:44:09.763935 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5ng4w"] Dec 09 14:44:09 crc kubenswrapper[4770]: W1209 14:44:09.958346 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ac324ef_d65f_421c_b382_9c321ae7d447.slice/crio-0730edad2a0f4bde8bbe43c926adf6c7b3257eaba5de20a86f97c0df4e227d60 WatchSource:0}: Error finding container 0730edad2a0f4bde8bbe43c926adf6c7b3257eaba5de20a86f97c0df4e227d60: Status 404 returned error can't find the container with id 0730edad2a0f4bde8bbe43c926adf6c7b3257eaba5de20a86f97c0df4e227d60 Dec 09 14:44:09 crc kubenswrapper[4770]: W1209 14:44:09.960195 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41190237_fd6b_45b3_b68e_ad67b77ea11d.slice/crio-72f6e95173c354ee7f41ac1acbc40d5af4c4602bc3b0cc1ba2d2c250b48f9e11 WatchSource:0}: Error finding container 72f6e95173c354ee7f41ac1acbc40d5af4c4602bc3b0cc1ba2d2c250b48f9e11: Status 404 returned error can't find the container with id 72f6e95173c354ee7f41ac1acbc40d5af4c4602bc3b0cc1ba2d2c250b48f9e11 Dec 09 14:44:09 crc kubenswrapper[4770]: W1209 14:44:09.981490 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8f05019_cb7b_46bb_bb57_2f8c6a9bba53.slice/crio-f7d91bae15b2a85b54198e335ee525eda8f9918f8f2c0f9e45bdb3deee043a4b WatchSource:0}: Error finding container f7d91bae15b2a85b54198e335ee525eda8f9918f8f2c0f9e45bdb3deee043a4b: Status 404 returned error can't find the container with id f7d91bae15b2a85b54198e335ee525eda8f9918f8f2c0f9e45bdb3deee043a4b Dec 09 14:44:10 crc kubenswrapper[4770]: I1209 14:44:10.075136 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 14:44:10 crc kubenswrapper[4770]: I1209 14:44:10.675186 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5ng4w" event={"ID":"f087e8e2-8532-4abc-925b-574ebb448bde","Type":"ContainerStarted","Data":"3654fb0954b660282c46560026b60d6d0de840bec6df7093bad3da7cd7293905"} Dec 09 14:44:10 crc kubenswrapper[4770]: I1209 14:44:10.680313 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"1ac324ef-d65f-421c-b382-9c321ae7d447","Type":"ContainerStarted","Data":"0730edad2a0f4bde8bbe43c926adf6c7b3257eaba5de20a86f97c0df4e227d60"} Dec 09 14:44:10 crc kubenswrapper[4770]: I1209 14:44:10.683478 4770 kubelet.go:2453] "SyncLoop 
Dec 09 14:44:10 crc kubenswrapper[4770]: I1209 14:44:10.683478 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"24f02fdc-5866-4325-8d48-1333cd9a33d9","Type":"ContainerStarted","Data":"58719e86b054e686a2cea4094da86f2b806e7ba9b1c38cf2f9b3e5003dc2b47f"}
Dec 09 14:44:10 crc kubenswrapper[4770]: I1209 14:44:10.685898 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb" event={"ID":"754ed0cc-ec25-45d4-b0d0-907d92e939fd","Type":"ContainerStarted","Data":"1d85e62819ddfa890ed74ca986453b888e0e58028fcc84bd15ff5048e08fe303"}
Dec 09 14:44:10 crc kubenswrapper[4770]: I1209 14:44:10.688885 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53","Type":"ContainerStarted","Data":"f7d91bae15b2a85b54198e335ee525eda8f9918f8f2c0f9e45bdb3deee043a4b"}
Dec 09 14:44:10 crc kubenswrapper[4770]: I1209 14:44:10.690414 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"41190237-fd6b-45b3-b68e-ad67b77ea11d","Type":"ContainerStarted","Data":"72f6e95173c354ee7f41ac1acbc40d5af4c4602bc3b0cc1ba2d2c250b48f9e11"}
Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.734915 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3917b78f-5515-4149-82d3-96a981c77ac5","Type":"ContainerStarted","Data":"fcf1e06b188e29cbe74b57097d0cf54727a29daedfbfe090a95dfd9115917e09"}
Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.848743 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5v8bz"]
Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.850415 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5v8bz"
Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.861075 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.863883 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-config\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz"
Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.863973 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-combined-ca-bundle\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz"
Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.864082 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz"
Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.864148 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-ovn-rundir\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz"
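The reconciler_common.go entries trace the volume manager's reconciliation for ovn-controller-metrics-5v8bz: each volume the pod declares is first verified as attached (VerifyControllerAttachedVolume), then mounted (MountVolume started, followed by MountVolume.SetUp succeeded). A compressed sketch of that desired-versus-actual loop, with illustrative types:

// Simplified sketch of the volume reconciliation visible in the log: every
// declared volume passes attach verification, then is mounted if not already.
package main

import "fmt"

type volume struct{ Name, Plugin string }

func reconcile(pod string, desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod=%s\n", v.Name, pod)
		if !mounted[v.Name] {
			fmt.Printf("MountVolume started for volume %q (%s)\n", v.Name, v.Plugin)
			mounted[v.Name] = true
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.Name)
		}
	}
}

func main() {
	desired := []volume{
		{"config", "kubernetes.io/configmap"},
		{"combined-ca-bundle", "kubernetes.io/secret"},
		{"ovn-rundir", "kubernetes.io/host-path"},
		{"kube-api-access-vmfmh", "kubernetes.io/projected"},
	}
	reconcile("openstack/ovn-controller-metrics-5v8bz", desired, map[string]bool{})
}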
\"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.864178 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-ovs-rundir\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.864212 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmfmh\" (UniqueName: \"kubernetes.io/projected/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-kube-api-access-vmfmh\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.867675 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5v8bz"] Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.965438 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-ovn-rundir\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.965483 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-ovs-rundir\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.965515 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmfmh\" (UniqueName: \"kubernetes.io/projected/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-kube-api-access-vmfmh\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.965585 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-config\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.965658 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-combined-ca-bundle\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.965700 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.965854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-ovn-rundir\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.965880 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-ovs-rundir\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.966746 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-config\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.974485 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.985096 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-combined-ca-bundle\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:14 crc kubenswrapper[4770]: I1209 14:44:14.991979 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmfmh\" (UniqueName: \"kubernetes.io/projected/c324e2da-48b3-4772-bcf4-ebb0dc2543eb-kube-api-access-vmfmh\") pod \"ovn-controller-metrics-5v8bz\" (UID: \"c324e2da-48b3-4772-bcf4-ebb0dc2543eb\") " pod="openstack/ovn-controller-metrics-5v8bz" Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.035254 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8c8gb"] Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.066941 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-l5tc8"] Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.076085 4770 util.go:30] "No sandbox for pod can be found. 
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.079642 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.088814 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-l5tc8"]
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.170142 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.170441 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.170521 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pttsm\" (UniqueName: \"kubernetes.io/projected/8b5ca819-c66f-45ef-93d0-acebf8e297fc-kube-api-access-pttsm\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.170626 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-config\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.175172 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rr5s8"]
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.198948 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5v8bz"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.231141 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-vbhct"]
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.232683 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.234531 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.241260 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vbhct"]
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.272380 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.272713 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.273779 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.274110 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-dns-svc\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.274316 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pttsm\" (UniqueName: \"kubernetes.io/projected/8b5ca819-c66f-45ef-93d0-acebf8e297fc-kube-api-access-pttsm\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.274774 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-config\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.275526 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-config\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.275696 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.275815 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfdf\" (UniqueName: \"kubernetes.io/projected/1b62ee43-2d8d-4d9d-a333-ba36d6755816-kube-api-access-gnfdf\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.275924 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-config\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.276051 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.276882 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.294689 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pttsm\" (UniqueName: \"kubernetes.io/projected/8b5ca819-c66f-45ef-93d0-acebf8e297fc-kube-api-access-pttsm\") pod \"dnsmasq-dns-5bf47b49b7-l5tc8\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.394338 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.395474 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.395538 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-dns-svc\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.395598 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.395628 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfdf\" (UniqueName: \"kubernetes.io/projected/1b62ee43-2d8d-4d9d-a333-ba36d6755816-kube-api-access-gnfdf\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.395659 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-config\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.397791 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-config\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.397860 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-dns-svc\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.398084 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.398708 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.419669 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfdf\" (UniqueName: \"kubernetes.io/projected/1b62ee43-2d8d-4d9d-a333-ba36d6755816-kube-api-access-gnfdf\") pod \"dnsmasq-dns-8554648995-vbhct\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:15 crc kubenswrapper[4770]: I1209 14:44:15.554648 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vbhct"
Dec 09 14:44:25 crc kubenswrapper[4770]: E1209 14:44:25.369144 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified"
Dec 09 14:44:25 crc kubenswrapper[4770]: E1209 14:44:25.369709 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5f7h567h67h88h7chch54dhd8h5b7hcfh577h66fh54bh7chbdh5h57fh5fbh659hdbhf9h8bhd4h564h5f5h56h5ch5h68bh597h5cdh5b4q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hvj9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(6fc21963-c17e-4378-938e-200a8497203e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
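The memcached-0 failure above shows the standard progression: the first sync surfaces ErrImagePull with the underlying rpc error, and while the backoff window is open, later syncs fail fast with ImagePullBackOff (visible at 14:44:25.835498 below) as the window grows exponentially. A sketch of that state machine; the durations are illustrative, not the kubelet's exact policy:

// Sketch of the ErrImagePull -> ImagePullBackOff progression.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay, max time.Duration
	until      time.Time
}

func (b *backoff) next(now time.Time) string {
	if now.Before(b.until) {
		return "ImagePullBackOff" // still inside the backoff window: fail fast
	}
	// Window expired: attempt the pull (assume it fails again), then widen.
	if b.delay == 0 {
		b.delay = 10 * time.Second
	} else {
		b.delay *= 2
		if b.delay > b.max {
			b.delay = b.max
		}
	}
	b.until = now.Add(b.delay)
	return "ErrImagePull"
}

func main() {
	b := &backoff{max: 5 * time.Minute}
	now := time.Now()
	for i := 0; i < 4; i++ {
		fmt.Println(b.next(now), "window:", b.delay)
		now = now.Add(15 * time.Second) // next sync attempt
	}
}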
Dec 09 14:44:25 crc kubenswrapper[4770]: E1209 14:44:25.371006 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="6fc21963-c17e-4378-938e-200a8497203e"
Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.434293 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8"
Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.592365 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nxpm\" (UniqueName: \"kubernetes.io/projected/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-kube-api-access-9nxpm\") pod \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") "
Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.592516 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-config\") pod \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") "
Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.592541 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-dns-svc\") pod \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\" (UID: \"8f64084c-eec9-4638-9bc2-ead5dd90d8b6\") "
Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.593270 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-config" (OuterVolumeSpecName: "config") pod "8f64084c-eec9-4638-9bc2-ead5dd90d8b6" (UID: "8f64084c-eec9-4638-9bc2-ead5dd90d8b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.593281 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f64084c-eec9-4638-9bc2-ead5dd90d8b6" (UID: "8f64084c-eec9-4638-9bc2-ead5dd90d8b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
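The UnmountVolume entries are the earlier mount flow run in reverse for the deleted dnsmasq-dns-57d769cc4f-rr5s8 pod: each volume is torn down, reported detached from the node, and the pod's volume directory is finally removed ("Cleaned up orphaned pod volumes dir" at 14:44:26). In outline, with illustrative code only:

// Reverse of the mount flow, matching the UnmountVolume/TearDown lines.
package main

import "fmt"

func teardown(podUID string, volumes []string) {
	for _, v := range volumes {
		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v, podUID)
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", v)
	}
	for _, v := range volumes {
		fmt.Printf("Volume detached for volume %q on node \"crc\"\n", v)
	}
	fmt.Printf("Cleaned up orphaned pod volumes dir /var/lib/kubelet/pods/%s/volumes\n", podUID)
}

func main() {
	teardown("8f64084c-eec9-4638-9bc2-ead5dd90d8b6",
		[]string{"kube-api-access-9nxpm", "config", "dns-svc"})
}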
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.613074 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-kube-api-access-9nxpm" (OuterVolumeSpecName: "kube-api-access-9nxpm") pod "8f64084c-eec9-4638-9bc2-ead5dd90d8b6" (UID: "8f64084c-eec9-4638-9bc2-ead5dd90d8b6"). InnerVolumeSpecName "kube-api-access-9nxpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.695223 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nxpm\" (UniqueName: \"kubernetes.io/projected/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-kube-api-access-9nxpm\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.695264 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.695277 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f64084c-eec9-4638-9bc2-ead5dd90d8b6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.833694 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8" Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.833716 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rr5s8" event={"ID":"8f64084c-eec9-4638-9bc2-ead5dd90d8b6","Type":"ContainerDied","Data":"324080b81e4ee2ddfacba03eb1866ae8bc9c631effc04768cc94c2e1856369d7"} Dec 09 14:44:25 crc kubenswrapper[4770]: E1209 14:44:25.835498 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="6fc21963-c17e-4378-938e-200a8497203e" Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.912807 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rr5s8"] Dec 09 14:44:25 crc kubenswrapper[4770]: I1209 14:44:25.920657 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rr5s8"] Dec 09 14:44:26 crc kubenswrapper[4770]: I1209 14:44:26.600507 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f64084c-eec9-4638-9bc2-ead5dd90d8b6" path="/var/lib/kubelet/pods/8f64084c-eec9-4638-9bc2-ead5dd90d8b6/volumes" Dec 09 14:44:40 crc kubenswrapper[4770]: E1209 14:44:40.675247 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e" Dec 09 14:44:40 crc kubenswrapper[4770]: E1209 14:44:40.676004 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 
--web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key --tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fw2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-bc75944f-rw588_openstack(bf0f3c4c-bbc9-484d-8153-e12ad4118c9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:44:40 crc kubenswrapper[4770]: E1209 14:44:40.677245 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588" podUID="bf0f3c4c-bbc9-484d-8153-e12ad4118c9a" Dec 09 14:44:40 crc kubenswrapper[4770]: E1209 14:44:40.968926 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588" podUID="bf0f3c4c-bbc9-484d-8153-e12ad4118c9a" Dec 09 14:44:41 crc kubenswrapper[4770]: E1209 14:44:41.530452 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 09 14:44:41 crc kubenswrapper[4770]: E1209 14:44:41.531435 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
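At this point the gateway and, below, every Loki component are failing on the same two image digests, so a quick way to triage a log like this is to aggregate the pull-failure lines per image. A small sketch that scans a kubelet journal on stdin (the invocation and file name are assumptions; the regexes target the exact wording of the entries above):

// Sketch: summarize stuck image pulls from a kubelet journal, e.g.
//   journalctl -u kubelet | go run summarize.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	pullFail = regexp.MustCompile(`"PullImage from image service failed".*image="([^"]+)"`)
	backoff  = regexp.MustCompile(`Back-off pulling image`)
)

func main() {
	fails := map[string]int{}
	backoffs := 0
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // the serialized Container specs make very long lines
	for sc.Scan() {
		line := sc.Text()
		if m := pullFail.FindStringSubmatch(line); m != nil {
			fails[m[1]]++
		}
		if backoff.MatchString(line) {
			backoffs++
		}
	}
	for img, n := range fails {
		fmt.Printf("%d pull failure(s): %s\n", n, img)
	}
	fmt.Printf("%d back-off message(s)\n", backoffs)
}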
Dec 09 14:44:41 crc kubenswrapper[4770]: E1209 14:44:41.531435 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-878f9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-querier-5467947bf7-6tmxj_openstack(3d8fd93c-ff55-4b03-9024-52af60e3e632): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 14:44:41 crc kubenswrapper[4770]: E1209 14:44:41.533071 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" podUID="3d8fd93c-ff55-4b03-9024-52af60e3e632"
Dec 09 14:44:41 crc kubenswrapper[4770]: I1209 14:44:41.975646 4770 generic.go:334] "Generic (PLEG): container finished" podID="1d87ac62-20d5-476f-97d9-34d8698fc78f" containerID="15e41d9e350eab62c920fb4fcf017208fc1572ddc46bd95710067dcfd113bd4e" exitCode=0
Dec 09 14:44:41 crc kubenswrapper[4770]: I1209 14:44:41.975870 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1d87ac62-20d5-476f-97d9-34d8698fc78f","Type":"ContainerDied","Data":"15e41d9e350eab62c920fb4fcf017208fc1572ddc46bd95710067dcfd113bd4e"}
Dec 09 14:44:41 crc kubenswrapper[4770]: E1209 14:44:41.977290 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" podUID="3d8fd93c-ff55-4b03-9024-52af60e3e632"
Dec 09 14:44:42 crc kubenswrapper[4770]: E1209 14:44:42.011671 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7"
Dec 09 14:44:42 crc kubenswrapper[4770]: E1209 14:44:42.011927 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-ingester,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=ingester -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:wal,ReadOnly:false,MountPath:/tmp/wal,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vhvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-ingester-0_openstack(67dab40a-3d7c-4737-bca9-28dc6280071c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 14:44:42 crc kubenswrapper[4770]: E1209 14:44:42.013152 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="67dab40a-3d7c-4737-bca9-28dc6280071c"
Dec 09 14:44:42 crc kubenswrapper[4770]: E1209 14:44:42.538243 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7"
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp_openstack(aeb389cf-bc24-4200-8561-a3c804f1d8c0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:44:42 crc kubenswrapper[4770]: E1209 14:44:42.539656 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp" podUID="aeb389cf-bc24-4200-8561-a3c804f1d8c0" Dec 09 14:44:42 crc kubenswrapper[4770]: E1209 14:44:42.985593 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp" podUID="aeb389cf-bc24-4200-8561-a3c804f1d8c0" Dec 09 14:44:42 crc kubenswrapper[4770]: E1209 14:44:42.986282 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="67dab40a-3d7c-4737-bca9-28dc6280071c" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.052425 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.053154 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-compactor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=compactor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhk7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-compactor-0_openstack(1ac324ef-d65f-421c-b382-9c321ae7d447): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:44:44 crc 
kubenswrapper[4770]: E1209 14:44:44.054431 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="1ac324ef-d65f-421c-b382-9c321ae7d447" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.244227 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.244409 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4g8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-distributor-664b687b54-75lfm_openstack(dc16ff55-b814-4912-842a-2744c0450b51): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.245595 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" podUID="dc16ff55-b814-4912-842a-2744c0450b51" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.254391 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.254794 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8tqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(31bb1b14-4de1-4586-8bde-d29afdaad6fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.256118 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.275184 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.275451 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-index-gateway,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=index-gateway -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wks8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-index-gateway-0_openstack(24f02fdc-5866-4325-8d48-1333cd9a33d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:44:44 
crc kubenswrapper[4770]: E1209 14:44:44.276782 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="24f02fdc-5866-4325-8d48-1333cd9a33d9" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.767929 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.768179 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key 
--tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lm8sg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-bc75944f-86wfb_openstack(754ed0cc-ec25-45d4-b0d0-907d92e939fd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.769376 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb" podUID="754ed0cc-ec25-45d4-b0d0-907d92e939fd" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.982212 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.982367 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/alertmanager/config/alertmanager.yaml.gz --config-envsubst-file=/etc/alertmanager/config_out/alertmanager.env.yaml --watched-dir=/etc/alertmanager/config],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:-1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52rpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(08118160-2e03-4319-97ed-051b92b14c1e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:44:44 crc kubenswrapper[4770]: E1209 14:44:44.983688 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/alertmanager-metric-storage-0" podUID="08118160-2e03-4319-97ed-051b92b14c1e" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.013147 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="1ac324ef-d65f-421c-b382-9c321ae7d447" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.013327 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="24f02fdc-5866-4325-8d48-1333cd9a33d9" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.013747 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="08118160-2e03-4319-97ed-051b92b14c1e" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.023446 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb" podUID="754ed0cc-ec25-45d4-b0d0-907d92e939fd" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.027168 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" podUID="dc16ff55-b814-4912-842a-2744c0450b51" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.043439 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.043563 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbfh668h5f9h5f4h584h7h5dhffh68fh5bdhc4h58dh65bh5d9h5bbh5fdh86hfdhddhd7h88h8ch5d6h75h645h66fh6ch88h678hcdh57ch55fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jqbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-5ng4w_openstack(f087e8e2-8532-4abc-925b-574ebb448bde): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.044918 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-5ng4w" podUID="f087e8e2-8532-4abc-925b-574ebb448bde" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.067741 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.067979 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz 
--config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4b9m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.071563 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.336471 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.336649 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbfh668h5f9h5f4h584h7h5dhffh68fh5bdhc4h58dh65bh5d9h5bbh5fdh86hfdhddhd7h88h8ch5d6h75h645h66fh6ch88h678hcdh57ch55fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4q9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-slxb5_openstack(8eebc5e5-e737-4171-abed-1e04fa89b0b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.337791 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-slxb5" podUID="8eebc5e5-e737-4171-abed-1e04fa89b0b4" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.516702 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Dec 09 14:44:45 crc kubenswrapper[4770]: E1209 14:44:45.516932 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n597h87h95h686hffh5h59h576h566h6fh4h64dh58chd4h697h577h94h97h5d5h89h695h8chd9h56ch568h676h5c7h75h565h7ch74hbdq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2l8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(3917b78f-5515-4149-82d3-96a981c77ac5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:44:45 crc kubenswrapper[4770]: I1209 14:44:45.952782 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vbhct"] Dec 09 14:44:46 crc kubenswrapper[4770]: E1209 14:44:46.005898 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 09 14:44:46 crc kubenswrapper[4770]: E1209 14:44:46.005969 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 09 14:44:46 crc kubenswrapper[4770]: E1209 14:44:46.006124 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp96k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(95b1d2b0-6b25-4853-aae2-9cdc30773854): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:44:46 crc kubenswrapper[4770]: E1209 14:44:46.007541 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="95b1d2b0-6b25-4853-aae2-9cdc30773854" Dec 09 14:44:46 crc kubenswrapper[4770]: E1209 14:44:46.021309 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-slxb5" podUID="8eebc5e5-e737-4171-abed-1e04fa89b0b4" Dec 09 14:44:46 crc kubenswrapper[4770]: E1209 14:44:46.022084 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-5ng4w" podUID="f087e8e2-8532-4abc-925b-574ebb448bde" Dec 09 14:44:46 crc kubenswrapper[4770]: E1209 14:44:46.022233 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" Dec 09 14:44:46 crc kubenswrapper[4770]: E1209 14:44:46.022484 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="95b1d2b0-6b25-4853-aae2-9cdc30773854" Dec 09 14:44:46 crc kubenswrapper[4770]: I1209 14:44:46.713583 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-l5tc8"] Dec 09 14:44:46 crc kubenswrapper[4770]: I1209 14:44:46.788048 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5v8bz"] Dec 09 14:44:46 crc kubenswrapper[4770]: W1209 14:44:46.803873 
4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc324e2da_48b3_4772_bcf4_ebb0dc2543eb.slice/crio-3aeaf1451b78f9acf00e21e2f013ced274909cd9a3fe3cbe2db3523eb199e6f9 WatchSource:0}: Error finding container 3aeaf1451b78f9acf00e21e2f013ced274909cd9a3fe3cbe2db3523eb199e6f9: Status 404 returned error can't find the container with id 3aeaf1451b78f9acf00e21e2f013ced274909cd9a3fe3cbe2db3523eb199e6f9 Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.027755 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5v8bz" event={"ID":"c324e2da-48b3-4772-bcf4-ebb0dc2543eb","Type":"ContainerStarted","Data":"3aeaf1451b78f9acf00e21e2f013ced274909cd9a3fe3cbe2db3523eb199e6f9"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.029557 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b62ee43-2d8d-4d9d-a333-ba36d6755816" containerID="dab547dbaf6a318640cf88cbe024faa57c57f3c4f6c3afc164e86d009256651a" exitCode=0 Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.029608 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vbhct" event={"ID":"1b62ee43-2d8d-4d9d-a333-ba36d6755816","Type":"ContainerDied","Data":"dab547dbaf6a318640cf88cbe024faa57c57f3c4f6c3afc164e86d009256651a"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.029661 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vbhct" event={"ID":"1b62ee43-2d8d-4d9d-a333-ba36d6755816","Type":"ContainerStarted","Data":"6bafaaef0c5745d6f7404ae41335e856643bc0fc2266f554a2a0dc7fec6e1e99"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.031838 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1d87ac62-20d5-476f-97d9-34d8698fc78f","Type":"ContainerStarted","Data":"0d0f358f3cbfe27eb4fb867420e18ddd7f5268617ad46cdb75feffb37dbff1f3"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.034065 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53","Type":"ContainerStarted","Data":"90cf8fad0fa03618e7238843b8b61a62be2c9dfa2ba43fdf0c2f57a27036729b"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.036313 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"41190237-fd6b-45b3-b68e-ad67b77ea11d","Type":"ContainerStarted","Data":"e3c1eaa579234dc177e3a6388618545f54931c7d8fb964d5df605f4703e4d1ca"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.041634 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" event={"ID":"85ea185b-86cb-4c27-855b-f27876b91a20","Type":"ContainerStarted","Data":"54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.041669 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.041998 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" podUID="85ea185b-86cb-4c27-855b-f27876b91a20" containerName="dnsmasq-dns" containerID="cri-o://54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e" gracePeriod=10 Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.046634 4770 generic.go:334] "Generic (PLEG): 
container finished" podID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" containerID="3ca7137d12b32f7b6051178da18d0d03f8153ef512d8a020eb32b20daa62c336" exitCode=0 Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.046667 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" event={"ID":"8b5ca819-c66f-45ef-93d0-acebf8e297fc","Type":"ContainerDied","Data":"3ca7137d12b32f7b6051178da18d0d03f8153ef512d8a020eb32b20daa62c336"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.046973 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" event={"ID":"8b5ca819-c66f-45ef-93d0-acebf8e297fc","Type":"ContainerStarted","Data":"0b06b1b0dd67016940d4144bb8d7103d4c05c2030facb8fe0c6a3550a9b8ac80"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.048863 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fc21963-c17e-4378-938e-200a8497203e","Type":"ContainerStarted","Data":"a2a6ab9c65f566616535203c509ef87ae79d24554b411d3d3a941136fa597546"} Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.049131 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.080467 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=45.557410175 podStartE2EDuration="1m8.080445813s" podCreationTimestamp="2025-12-09 14:43:39 +0000 UTC" firstStartedPulling="2025-12-09 14:43:45.154523264 +0000 UTC m=+1257.050725410" lastFinishedPulling="2025-12-09 14:44:07.677558912 +0000 UTC m=+1279.573761048" observedRunningTime="2025-12-09 14:44:47.072022519 +0000 UTC m=+1318.968224655" watchObservedRunningTime="2025-12-09 14:44:47.080445813 +0000 UTC m=+1318.976647949" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.090623 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=26.585121855 podStartE2EDuration="1m5.090605464s" podCreationTimestamp="2025-12-09 14:43:42 +0000 UTC" firstStartedPulling="2025-12-09 14:44:07.779969861 +0000 UTC m=+1279.676171997" lastFinishedPulling="2025-12-09 14:44:46.28545347 +0000 UTC m=+1318.181655606" observedRunningTime="2025-12-09 14:44:47.090144631 +0000 UTC m=+1318.986346777" watchObservedRunningTime="2025-12-09 14:44:47.090605464 +0000 UTC m=+1318.986807600" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.128981 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" podStartSLOduration=41.202159548 podStartE2EDuration="1m10.128961187s" podCreationTimestamp="2025-12-09 14:43:37 +0000 UTC" firstStartedPulling="2025-12-09 14:43:38.679119827 +0000 UTC m=+1250.575321963" lastFinishedPulling="2025-12-09 14:44:07.605921466 +0000 UTC m=+1279.502123602" observedRunningTime="2025-12-09 14:44:47.12506914 +0000 UTC m=+1319.021271276" watchObservedRunningTime="2025-12-09 14:44:47.128961187 +0000 UTC m=+1319.025163323" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.477178 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.576684 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-dns-svc\") pod \"85ea185b-86cb-4c27-855b-f27876b91a20\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.577198 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzs7z\" (UniqueName: \"kubernetes.io/projected/85ea185b-86cb-4c27-855b-f27876b91a20-kube-api-access-kzs7z\") pod \"85ea185b-86cb-4c27-855b-f27876b91a20\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.577348 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-config\") pod \"85ea185b-86cb-4c27-855b-f27876b91a20\" (UID: \"85ea185b-86cb-4c27-855b-f27876b91a20\") " Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.582606 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ea185b-86cb-4c27-855b-f27876b91a20-kube-api-access-kzs7z" (OuterVolumeSpecName: "kube-api-access-kzs7z") pod "85ea185b-86cb-4c27-855b-f27876b91a20" (UID: "85ea185b-86cb-4c27-855b-f27876b91a20"). InnerVolumeSpecName "kube-api-access-kzs7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.622556 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85ea185b-86cb-4c27-855b-f27876b91a20" (UID: "85ea185b-86cb-4c27-855b-f27876b91a20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.622867 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-config" (OuterVolumeSpecName: "config") pod "85ea185b-86cb-4c27-855b-f27876b91a20" (UID: "85ea185b-86cb-4c27-855b-f27876b91a20"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.700696 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzs7z\" (UniqueName: \"kubernetes.io/projected/85ea185b-86cb-4c27-855b-f27876b91a20-kube-api-access-kzs7z\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.700764 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:47 crc kubenswrapper[4770]: I1209 14:44:47.700774 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ea185b-86cb-4c27-855b-f27876b91a20-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.057583 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469","Type":"ContainerStarted","Data":"d49bf7eb57ed05ebfbd4c93d645ad809edd662e415eb80a9cfd4911f513fdb00"} Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.060699 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vbhct" event={"ID":"1b62ee43-2d8d-4d9d-a333-ba36d6755816","Type":"ContainerStarted","Data":"b235b3e290b3ae8f48775feaca1808ec624cb6a8a92b6768794d4f8c659c1172"} Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.060922 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-vbhct" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.063296 4770 generic.go:334] "Generic (PLEG): container finished" podID="85ea185b-86cb-4c27-855b-f27876b91a20" containerID="54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e" exitCode=0 Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.063357 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.063362 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" event={"ID":"85ea185b-86cb-4c27-855b-f27876b91a20","Type":"ContainerDied","Data":"54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e"} Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.063468 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8c8gb" event={"ID":"85ea185b-86cb-4c27-855b-f27876b91a20","Type":"ContainerDied","Data":"6b7f1889eaf7aec71642d7e0879ebe98d94af0bab41ba4a61201eaccf3afd907"} Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.063490 4770 scope.go:117] "RemoveContainer" containerID="54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.066365 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" event={"ID":"8b5ca819-c66f-45ef-93d0-acebf8e297fc","Type":"ContainerStarted","Data":"0fc099b8809c77e29de2c3760c198890d7165488cb015e5924a4f04ada773430"} Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.089410 4770 scope.go:117] "RemoveContainer" containerID="0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.101225 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-vbhct" podStartSLOduration=33.101202854 podStartE2EDuration="33.101202854s" podCreationTimestamp="2025-12-09 14:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:44:48.098507779 +0000 UTC m=+1319.994709915" watchObservedRunningTime="2025-12-09 14:44:48.101202854 +0000 UTC m=+1319.997405000" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.119596 4770 scope.go:117] "RemoveContainer" containerID="54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e" Dec 09 14:44:48 crc kubenswrapper[4770]: E1209 14:44:48.120052 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e\": container with ID starting with 54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e not found: ID does not exist" containerID="54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.120102 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e"} err="failed to get container status \"54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e\": rpc error: code = NotFound desc = could not find container \"54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e\": container with ID starting with 54f88fe6155c38323eef40ee066a623ccd422f5707cd4e1b2055a052a11d171e not found: ID does not exist" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.120136 4770 scope.go:117] "RemoveContainer" containerID="0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b" Dec 09 14:44:48 crc kubenswrapper[4770]: E1209 14:44:48.120487 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b\": container with ID starting with 0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b not found: ID does not exist" containerID="0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.120535 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b"} err="failed to get container status \"0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b\": rpc error: code = NotFound desc = could not find container \"0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b\": container with ID starting with 0f255dc841ab014f038d533eb3f87015c1bc102ec0c3b8a191462df10e7ca18b not found: ID does not exist" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.128224 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8c8gb"] Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.138549 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8c8gb"] Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.149822 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" podStartSLOduration=33.149803431 podStartE2EDuration="33.149803431s" podCreationTimestamp="2025-12-09 14:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:44:48.148212038 +0000 UTC m=+1320.044414174" watchObservedRunningTime="2025-12-09 14:44:48.149803431 +0000 UTC m=+1320.046005567" Dec 09 14:44:48 crc kubenswrapper[4770]: I1209 14:44:48.606774 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ea185b-86cb-4c27-855b-f27876b91a20" path="/var/lib/kubelet/pods/85ea185b-86cb-4c27-855b-f27876b91a20/volumes" Dec 09 14:44:49 crc kubenswrapper[4770]: I1209 14:44:49.075617 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" Dec 09 14:44:50 crc kubenswrapper[4770]: I1209 14:44:50.810394 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 09 14:44:50 crc kubenswrapper[4770]: I1209 14:44:50.811079 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 09 14:44:52 crc kubenswrapper[4770]: I1209 14:44:52.102550 4770 generic.go:334] "Generic (PLEG): container finished" podID="b8f05019-cb7b-46bb-bb57-2f8c6a9bba53" containerID="90cf8fad0fa03618e7238843b8b61a62be2c9dfa2ba43fdf0c2f57a27036729b" exitCode=0 Dec 09 14:44:52 crc kubenswrapper[4770]: I1209 14:44:52.102644 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53","Type":"ContainerDied","Data":"90cf8fad0fa03618e7238843b8b61a62be2c9dfa2ba43fdf0c2f57a27036729b"} Dec 09 14:44:52 crc kubenswrapper[4770]: I1209 14:44:52.424107 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 09 14:44:52 crc kubenswrapper[4770]: E1209 14:44:52.631140 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/ovsdbserver-nb-0" podUID="3917b78f-5515-4149-82d3-96a981c77ac5" Dec 09 14:44:53 crc kubenswrapper[4770]: I1209 14:44:53.115173 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8f05019-cb7b-46bb-bb57-2f8c6a9bba53","Type":"ContainerStarted","Data":"8f0bb137178b93f58027a8a3c638ed2a4ad8d3299e3a4fed262a81e0fd86b7ae"} Dec 09 14:44:53 crc kubenswrapper[4770]: I1209 14:44:53.119045 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"41190237-fd6b-45b3-b68e-ad67b77ea11d","Type":"ContainerStarted","Data":"4ef6b2fb2a13f09505b3a64ef1147d1d2abe13a36c53ef30afc7bd9fa95455a8"} Dec 09 14:44:53 crc kubenswrapper[4770]: I1209 14:44:53.123187 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3917b78f-5515-4149-82d3-96a981c77ac5","Type":"ContainerStarted","Data":"1d3b63df8fa35b800fdc897a6cb22f47329bffe5dd2df6e1373cc716cd8928b6"} Dec 09 14:44:53 crc kubenswrapper[4770]: E1209 14:44:53.124541 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="3917b78f-5515-4149-82d3-96a981c77ac5" Dec 09 14:44:53 crc kubenswrapper[4770]: I1209 14:44:53.125549 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5v8bz" event={"ID":"c324e2da-48b3-4772-bcf4-ebb0dc2543eb","Type":"ContainerStarted","Data":"b30a2c3ae50b22d9f74f8ee752ac601edb726d4d0866fc1e7302b08c27970eef"} Dec 09 14:44:53 crc kubenswrapper[4770]: I1209 14:44:53.140905 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=73.140887875 podStartE2EDuration="1m13.140887875s" podCreationTimestamp="2025-12-09 14:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:44:53.139191239 +0000 UTC m=+1325.035393365" watchObservedRunningTime="2025-12-09 14:44:53.140887875 +0000 UTC m=+1325.037090011" Dec 09 14:44:53 crc kubenswrapper[4770]: I1209 14:44:53.164595 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.895276427 podStartE2EDuration="1m2.164573762s" podCreationTimestamp="2025-12-09 14:43:51 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.965839397 +0000 UTC m=+1281.862041533" lastFinishedPulling="2025-12-09 14:44:52.235136722 +0000 UTC m=+1324.131338868" observedRunningTime="2025-12-09 14:44:53.161188108 +0000 UTC m=+1325.057390264" watchObservedRunningTime="2025-12-09 14:44:53.164573762 +0000 UTC m=+1325.060775898" Dec 09 14:44:53 crc kubenswrapper[4770]: I1209 14:44:53.187879 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5v8bz" podStartSLOduration=33.758350717 podStartE2EDuration="39.187858408s" podCreationTimestamp="2025-12-09 14:44:14 +0000 UTC" firstStartedPulling="2025-12-09 14:44:46.807685039 +0000 UTC m=+1318.703887175" lastFinishedPulling="2025-12-09 14:44:52.2371927 +0000 UTC m=+1324.133394866" observedRunningTime="2025-12-09 14:44:53.17750581 +0000 UTC m=+1325.073707946" watchObservedRunningTime="2025-12-09 14:44:53.187858408 +0000 UTC m=+1325.084060544" Dec 09 14:44:53 crc 
kubenswrapper[4770]: I1209 14:44:53.474578 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 09 14:44:53 crc kubenswrapper[4770]: I1209 14:44:53.474619 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 09 14:44:53 crc kubenswrapper[4770]: I1209 14:44:53.513041 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.133879 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588" event={"ID":"bf0f3c4c-bbc9-484d-8153-e12ad4118c9a","Type":"ContainerStarted","Data":"42c674bc9e733cd4bffd66ea2aec9adef4f018da18cc8a2c3957c01808b1a338"} Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.134514 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588" Dec 09 14:44:54 crc kubenswrapper[4770]: E1209 14:44:54.134748 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="3917b78f-5515-4149-82d3-96a981c77ac5" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.155021 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.182323 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-rw588" podStartSLOduration=13.601763443 podStartE2EDuration="58.1823066s" podCreationTimestamp="2025-12-09 14:43:56 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.236545807 +0000 UTC m=+1281.132747943" lastFinishedPulling="2025-12-09 14:44:53.817088954 +0000 UTC m=+1325.713291100" observedRunningTime="2025-12-09 14:44:54.174811412 +0000 UTC m=+1326.071013548" watchObservedRunningTime="2025-12-09 14:44:54.1823066 +0000 UTC m=+1326.078508736" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.190641 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.523082 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-l5tc8"] Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.523356 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" podUID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" containerName="dnsmasq-dns" containerID="cri-o://0fc099b8809c77e29de2c3760c198890d7165488cb015e5924a4f04ada773430" gracePeriod=10 Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.525937 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.562862 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fw9zr"] Dec 09 14:44:54 crc kubenswrapper[4770]: E1209 14:44:54.563233 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ea185b-86cb-4c27-855b-f27876b91a20" containerName="init" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.563255 4770 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="85ea185b-86cb-4c27-855b-f27876b91a20" containerName="init" Dec 09 14:44:54 crc kubenswrapper[4770]: E1209 14:44:54.563289 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ea185b-86cb-4c27-855b-f27876b91a20" containerName="dnsmasq-dns" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.563297 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ea185b-86cb-4c27-855b-f27876b91a20" containerName="dnsmasq-dns" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.563445 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ea185b-86cb-4c27-855b-f27876b91a20" containerName="dnsmasq-dns" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.564443 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.580882 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fw9zr"] Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.638180 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.638400 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.638451 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsn7p\" (UniqueName: \"kubernetes.io/projected/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-kube-api-access-gsn7p\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.638473 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-config\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.638490 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.740050 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.740159 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.740209 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsn7p\" (UniqueName: \"kubernetes.io/projected/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-kube-api-access-gsn7p\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.740233 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-config\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.740253 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.741100 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.741409 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.741473 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-config\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.741816 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.761897 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsn7p\" (UniqueName: \"kubernetes.io/projected/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-kube-api-access-gsn7p\") pod \"dnsmasq-dns-b8fbc5445-fw9zr\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.882643 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:44:54 crc kubenswrapper[4770]: I1209 14:44:54.958092 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.066798 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.154791 4770 generic.go:334] "Generic (PLEG): container finished" podID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" containerID="0fc099b8809c77e29de2c3760c198890d7165488cb015e5924a4f04ada773430" exitCode=0 Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.155025 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" event={"ID":"8b5ca819-c66f-45ef-93d0-acebf8e297fc","Type":"ContainerDied","Data":"0fc099b8809c77e29de2c3760c198890d7165488cb015e5924a4f04ada773430"} Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.395450 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" podUID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.445174 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fw9zr"] Dec 09 14:44:55 crc kubenswrapper[4770]: W1209 14:44:55.459684 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0071ecd_b5e2_4ee8_ac00_5ff90be3b57d.slice/crio-9e0e4c189d283cfee9be63ec26664b5dbeb49c24c1f9687c8ae9c49998c99516 WatchSource:0}: Error finding container 9e0e4c189d283cfee9be63ec26664b5dbeb49c24c1f9687c8ae9c49998c99516: Status 404 returned error can't find the container with id 9e0e4c189d283cfee9be63ec26664b5dbeb49c24c1f9687c8ae9c49998c99516 Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.555863 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-vbhct" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.730159 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.739867 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.742912 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.743143 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.743281 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-j2xqw" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.743440 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.744283 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.865788 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5v8k\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-kube-api-access-v5v8k\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.865868 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.865963 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b9cb26ca-feb3-471b-9be8-eb69eb8c44a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b9cb26ca-feb3-471b-9be8-eb69eb8c44a5\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.866030 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cbc15e71-9605-466b-8947-aa2ca716bc2d-cache\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.866056 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cbc15e71-9605-466b-8947-aa2ca716bc2d-lock\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.967547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b9cb26ca-feb3-471b-9be8-eb69eb8c44a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b9cb26ca-feb3-471b-9be8-eb69eb8c44a5\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.967946 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cbc15e71-9605-466b-8947-aa2ca716bc2d-cache\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 
14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.968045 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cbc15e71-9605-466b-8947-aa2ca716bc2d-lock\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.968163 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5v8k\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-kube-api-access-v5v8k\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.968284 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: E1209 14:44:55.968428 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 14:44:55 crc kubenswrapper[4770]: E1209 14:44:55.968464 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 14:44:55 crc kubenswrapper[4770]: E1209 14:44:55.968523 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift podName:cbc15e71-9605-466b-8947-aa2ca716bc2d nodeName:}" failed. No retries permitted until 2025-12-09 14:44:56.468502795 +0000 UTC m=+1328.364704931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift") pod "swift-storage-0" (UID: "cbc15e71-9605-466b-8947-aa2ca716bc2d") : configmap "swift-ring-files" not found Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.968932 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cbc15e71-9605-466b-8947-aa2ca716bc2d-cache\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.968964 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cbc15e71-9605-466b-8947-aa2ca716bc2d-lock\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.970349 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
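The etc-swift mount failures recorded around this point show the kubelet's per-volume retry backoff doubling: durationBeforeRetry goes 500ms, then 1s, 2s, and 4s, while swift-storage-0 waits for the swift-ring-files ConfigMap that the ring-rebalance job has not yet published. A minimal Go sketch of that doubling-with-cap pattern follows; it is illustrative only, not the kubelet's actual nestedpendingoperations code, and the cap value and the mountOnce helper are assumptions made for the sketch.

```go
// Minimal sketch of the doubling retry delay visible in the etc-swift
// MountVolume.SetUp failures in this log (durationBeforeRetry 500ms,
// then 1s, 2s, 4s). Illustrative only: the kubelet's real logic lives in
// nestedpendingoperations and differs in detail; the cap below is an
// assumed value, and mountOnce is a hypothetical stand-in for one
// MountVolume.SetUp attempt.
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountOnce fails until the ConfigMap the projected volume references
// exists, mirroring `configmap "swift-ring-files" not found`.
func mountOnce(configMapExists bool) error {
	if !configMapExists {
		return errors.New(`configmap "swift-ring-files" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // initial durationBeforeRetry
	maxDelay := 2 * time.Minute     // assumed cap for this sketch
	for attempt := 1; ; attempt++ {
		// Pretend the ring-rebalance job publishes the ConfigMap in time
		// for the fifth attempt.
		if err := mountOnce(attempt >= 5); err != nil {
			fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Printf("attempt %d: MountVolume.SetUp succeeded\n", attempt)
		return
	}
}
```

Run as-is, the simulated delays reproduce the 500ms, 1s, 2s, 4s progression seen in the surrounding records before the mount eventually succeeds once the ring files exist.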
Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.970385 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b9cb26ca-feb3-471b-9be8-eb69eb8c44a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b9cb26ca-feb3-471b-9be8-eb69eb8c44a5\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9505481f7608de7059d4dd42043be40237cf9987a7d5915a58a1672263e21157/globalmount\"" pod="openstack/swift-storage-0" Dec 09 14:44:55 crc kubenswrapper[4770]: I1209 14:44:55.999414 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5v8k\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-kube-api-access-v5v8k\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.004305 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b9cb26ca-feb3-471b-9be8-eb69eb8c44a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b9cb26ca-feb3-471b-9be8-eb69eb8c44a5\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.163026 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" event={"ID":"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d","Type":"ContainerStarted","Data":"9e0e4c189d283cfee9be63ec26664b5dbeb49c24c1f9687c8ae9c49998c99516"} Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.262904 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dshr4"] Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.264264 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.266230 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.266652 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.268370 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.277408 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dshr4"] Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.377234 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b8f30831-ad4f-4009-b177-e645f911f5b4-etc-swift\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.377413 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xpf6\" (UniqueName: \"kubernetes.io/projected/b8f30831-ad4f-4009-b177-e645f911f5b4-kube-api-access-4xpf6\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.377658 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-scripts\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.377827 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-combined-ca-bundle\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.377900 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-ring-data-devices\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.378002 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-dispersionconf\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.378100 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-swiftconf\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 
14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.482937 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b8f30831-ad4f-4009-b177-e645f911f5b4-etc-swift\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.483016 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xpf6\" (UniqueName: \"kubernetes.io/projected/b8f30831-ad4f-4009-b177-e645f911f5b4-kube-api-access-4xpf6\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.483074 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-scripts\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.483139 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-combined-ca-bundle\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.483172 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.483197 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-ring-data-devices\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.483217 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-dispersionconf\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.483281 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-swiftconf\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: E1209 14:44:56.483509 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 14:44:56 crc kubenswrapper[4770]: E1209 14:44:56.483537 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 14:44:56 crc kubenswrapper[4770]: E1209 14:44:56.483585 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift podName:cbc15e71-9605-466b-8947-aa2ca716bc2d nodeName:}" failed. No retries permitted until 2025-12-09 14:44:57.483566574 +0000 UTC m=+1329.379768710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift") pod "swift-storage-0" (UID: "cbc15e71-9605-466b-8947-aa2ca716bc2d") : configmap "swift-ring-files" not found Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.483514 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b8f30831-ad4f-4009-b177-e645f911f5b4-etc-swift\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.484275 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-ring-data-devices\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.484347 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-scripts\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.488460 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-dispersionconf\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.489007 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-swiftconf\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.493993 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-combined-ca-bundle\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.509099 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xpf6\" (UniqueName: \"kubernetes.io/projected/b8f30831-ad4f-4009-b177-e645f911f5b4-kube-api-access-4xpf6\") pod \"swift-ring-rebalance-dshr4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:56 crc kubenswrapper[4770]: I1209 14:44:56.594918 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:44:57 crc kubenswrapper[4770]: I1209 14:44:57.100115 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dshr4"] Dec 09 14:44:57 crc kubenswrapper[4770]: W1209 14:44:57.108847 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8f30831_ad4f_4009_b177_e645f911f5b4.slice/crio-7c7eaf34dc65b3826f5d36183942764836f0dd992ebdf773f07d2c51fcbaf226 WatchSource:0}: Error finding container 7c7eaf34dc65b3826f5d36183942764836f0dd992ebdf773f07d2c51fcbaf226: Status 404 returned error can't find the container with id 7c7eaf34dc65b3826f5d36183942764836f0dd992ebdf773f07d2c51fcbaf226 Dec 09 14:44:57 crc kubenswrapper[4770]: I1209 14:44:57.174082 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dshr4" event={"ID":"b8f30831-ad4f-4009-b177-e645f911f5b4","Type":"ContainerStarted","Data":"7c7eaf34dc65b3826f5d36183942764836f0dd992ebdf773f07d2c51fcbaf226"} Dec 09 14:44:57 crc kubenswrapper[4770]: I1209 14:44:57.500868 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:57 crc kubenswrapper[4770]: E1209 14:44:57.501149 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 14:44:57 crc kubenswrapper[4770]: E1209 14:44:57.501185 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 14:44:57 crc kubenswrapper[4770]: E1209 14:44:57.501258 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift podName:cbc15e71-9605-466b-8947-aa2ca716bc2d nodeName:}" failed. No retries permitted until 2025-12-09 14:44:59.501235751 +0000 UTC m=+1331.397437897 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift") pod "swift-storage-0" (UID: "cbc15e71-9605-466b-8947-aa2ca716bc2d") : configmap "swift-ring-files" not found Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.194786 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb" event={"ID":"754ed0cc-ec25-45d4-b0d0-907d92e939fd","Type":"ContainerStarted","Data":"a4f492598a105c85a0f06bb7689ed5fa88867e825baa67f9807840ae3db1ada9"} Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.195835 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.196419 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31bb1b14-4de1-4586-8bde-d29afdaad6fd","Type":"ContainerStarted","Data":"0124084ad1afd737a4b241af8de506900eae154a555a4d4a8ce733160ae30b59"} Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.199593 4770 generic.go:334] "Generic (PLEG): container finished" podID="d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" containerID="a99b06c297dc779da4bcf0e3f3d9e63fba5a71f67d9a0982debe51a3bb2bb151" exitCode=0 Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.199658 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" event={"ID":"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d","Type":"ContainerDied","Data":"a99b06c297dc779da4bcf0e3f3d9e63fba5a71f67d9a0982debe51a3bb2bb151"} Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.206641 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.225207 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-86wfb" podStartSLOduration=-9223371973.629585 podStartE2EDuration="1m3.225189759s" podCreationTimestamp="2025-12-09 14:43:56 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.973356216 +0000 UTC m=+1281.869558352" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:44:59.218599177 +0000 UTC m=+1331.114801333" watchObservedRunningTime="2025-12-09 14:44:59.225189759 +0000 UTC m=+1331.121391895" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.369805 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.468229 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pttsm\" (UniqueName: \"kubernetes.io/projected/8b5ca819-c66f-45ef-93d0-acebf8e297fc-kube-api-access-pttsm\") pod \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.468358 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-dns-svc\") pod \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.468427 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-ovsdbserver-nb\") pod \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.468484 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-config\") pod \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\" (UID: \"8b5ca819-c66f-45ef-93d0-acebf8e297fc\") " Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.473272 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5ca819-c66f-45ef-93d0-acebf8e297fc-kube-api-access-pttsm" (OuterVolumeSpecName: "kube-api-access-pttsm") pod "8b5ca819-c66f-45ef-93d0-acebf8e297fc" (UID: "8b5ca819-c66f-45ef-93d0-acebf8e297fc"). InnerVolumeSpecName "kube-api-access-pttsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.517196 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b5ca819-c66f-45ef-93d0-acebf8e297fc" (UID: "8b5ca819-c66f-45ef-93d0-acebf8e297fc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.543032 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b5ca819-c66f-45ef-93d0-acebf8e297fc" (UID: "8b5ca819-c66f-45ef-93d0-acebf8e297fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.552973 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-config" (OuterVolumeSpecName: "config") pod "8b5ca819-c66f-45ef-93d0-acebf8e297fc" (UID: "8b5ca819-c66f-45ef-93d0-acebf8e297fc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.570674 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:44:59 crc kubenswrapper[4770]: E1209 14:44:59.570872 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 14:44:59 crc kubenswrapper[4770]: E1209 14:44:59.570895 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 14:44:59 crc kubenswrapper[4770]: E1209 14:44:59.570944 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift podName:cbc15e71-9605-466b-8947-aa2ca716bc2d nodeName:}" failed. No retries permitted until 2025-12-09 14:45:03.570928735 +0000 UTC m=+1335.467130871 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift") pod "swift-storage-0" (UID: "cbc15e71-9605-466b-8947-aa2ca716bc2d") : configmap "swift-ring-files" not found Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.570965 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.570984 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.570998 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5ca819-c66f-45ef-93d0-acebf8e297fc-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:44:59 crc kubenswrapper[4770]: I1209 14:44:59.571010 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pttsm\" (UniqueName: \"kubernetes.io/projected/8b5ca819-c66f-45ef-93d0-acebf8e297fc-kube-api-access-pttsm\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.161387 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws"] Dec 09 14:45:00 crc kubenswrapper[4770]: E1209 14:45:00.162220 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" containerName="init" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.162241 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" containerName="init" Dec 09 14:45:00 crc kubenswrapper[4770]: E1209 14:45:00.162268 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" containerName="dnsmasq-dns" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.162277 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" containerName="dnsmasq-dns" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.162523 4770 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" containerName="dnsmasq-dns" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.163404 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.163859 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws"] Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.169089 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.169173 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.186848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7044d2d-0426-4e15-ae14-5734939f8884-config-volume\") pod \"collect-profiles-29421525-krtws\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.186995 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzbqw\" (UniqueName: \"kubernetes.io/projected/b7044d2d-0426-4e15-ae14-5734939f8884-kube-api-access-rzbqw\") pod \"collect-profiles-29421525-krtws\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.187031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7044d2d-0426-4e15-ae14-5734939f8884-secret-volume\") pod \"collect-profiles-29421525-krtws\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.216928 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" event={"ID":"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d","Type":"ContainerStarted","Data":"3c35eac63a40b9ced8754d68bc84fb6212067662778f4f8b37fc4f134acfd619"} Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.217002 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.218792 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" event={"ID":"3d8fd93c-ff55-4b03-9024-52af60e3e632","Type":"ContainerStarted","Data":"3579146e1997db7d2b4f63094a24b4f92582a3cc9f8148e6997eb64224c46109"} Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.219658 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.223158 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.223302 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-l5tc8" event={"ID":"8b5ca819-c66f-45ef-93d0-acebf8e297fc","Type":"ContainerDied","Data":"0b06b1b0dd67016940d4144bb8d7103d4c05c2030facb8fe0c6a3550a9b8ac80"} Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.223375 4770 scope.go:117] "RemoveContainer" containerID="0fc099b8809c77e29de2c3760c198890d7165488cb015e5924a4f04ada773430" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.238374 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" podStartSLOduration=6.238355341 podStartE2EDuration="6.238355341s" podCreationTimestamp="2025-12-09 14:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:00.236201361 +0000 UTC m=+1332.132403497" watchObservedRunningTime="2025-12-09 14:45:00.238355341 +0000 UTC m=+1332.134557477" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.262328 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" podStartSLOduration=14.268675205 podStartE2EDuration="1m4.262311335s" podCreationTimestamp="2025-12-09 14:43:56 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.234374937 +0000 UTC m=+1281.130577073" lastFinishedPulling="2025-12-09 14:44:59.228011067 +0000 UTC m=+1331.124213203" observedRunningTime="2025-12-09 14:45:00.259902438 +0000 UTC m=+1332.156104574" watchObservedRunningTime="2025-12-09 14:45:00.262311335 +0000 UTC m=+1332.158513471" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.288899 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7044d2d-0426-4e15-ae14-5734939f8884-config-volume\") pod \"collect-profiles-29421525-krtws\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.289248 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzbqw\" (UniqueName: \"kubernetes.io/projected/b7044d2d-0426-4e15-ae14-5734939f8884-kube-api-access-rzbqw\") pod \"collect-profiles-29421525-krtws\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.289299 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7044d2d-0426-4e15-ae14-5734939f8884-secret-volume\") pod \"collect-profiles-29421525-krtws\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.289861 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7044d2d-0426-4e15-ae14-5734939f8884-config-volume\") pod \"collect-profiles-29421525-krtws\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.354565 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7044d2d-0426-4e15-ae14-5734939f8884-secret-volume\") pod \"collect-profiles-29421525-krtws\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.354827 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzbqw\" (UniqueName: \"kubernetes.io/projected/b7044d2d-0426-4e15-ae14-5734939f8884-kube-api-access-rzbqw\") pod \"collect-profiles-29421525-krtws\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.495768 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.498677 4770 scope.go:117] "RemoveContainer" containerID="3ca7137d12b32f7b6051178da18d0d03f8153ef512d8a020eb32b20daa62c336" Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.538701 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-l5tc8"] Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.550675 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-l5tc8"] Dec 09 14:45:00 crc kubenswrapper[4770]: I1209 14:45:00.611768 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5ca819-c66f-45ef-93d0-acebf8e297fc" path="/var/lib/kubelet/pods/8b5ca819-c66f-45ef-93d0-acebf8e297fc/volumes" Dec 09 14:45:01 crc kubenswrapper[4770]: I1209 14:45:01.116940 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws"] Dec 09 14:45:01 crc kubenswrapper[4770]: W1209 14:45:01.121092 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7044d2d_0426_4e15_ae14_5734939f8884.slice/crio-9809d15e409f64c8c15a81d4e8e0c0ca10e7f4d94d9585b44ce67337f82156ad WatchSource:0}: Error finding container 9809d15e409f64c8c15a81d4e8e0c0ca10e7f4d94d9585b44ce67337f82156ad: Status 404 returned error can't find the container with id 9809d15e409f64c8c15a81d4e8e0c0ca10e7f4d94d9585b44ce67337f82156ad Dec 09 14:45:01 crc kubenswrapper[4770]: I1209 14:45:01.232813 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp" event={"ID":"aeb389cf-bc24-4200-8561-a3c804f1d8c0","Type":"ContainerStarted","Data":"67ce6d0c4bf47ac961513aa6a50c5222a303aedd0dc543572c39ecdcbea6dd0d"} Dec 09 14:45:01 crc kubenswrapper[4770]: I1209 14:45:01.233019 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp" Dec 09 14:45:01 crc kubenswrapper[4770]: I1209 14:45:01.236308 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"24f02fdc-5866-4325-8d48-1333cd9a33d9","Type":"ContainerStarted","Data":"926d818598b9ce3708f24029d43b70fb0a3428bb257a8baa76d3bc0bd0f7304f"} Dec 09 14:45:01 crc kubenswrapper[4770]: I1209 14:45:01.236953 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:45:01 crc 
kubenswrapper[4770]: I1209 14:45:01.240393 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" event={"ID":"b7044d2d-0426-4e15-ae14-5734939f8884","Type":"ContainerStarted","Data":"9809d15e409f64c8c15a81d4e8e0c0ca10e7f4d94d9585b44ce67337f82156ad"} Dec 09 14:45:01 crc kubenswrapper[4770]: I1209 14:45:01.244023 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5ng4w" event={"ID":"f087e8e2-8532-4abc-925b-574ebb448bde","Type":"ContainerStarted","Data":"a22795465125c6142628dbbd793d5237eedc43f2338e4607ab0810aefc4c0487"} Dec 09 14:45:01 crc kubenswrapper[4770]: I1209 14:45:01.257249 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp" podStartSLOduration=14.550885706 podStartE2EDuration="1m5.257230489s" podCreationTimestamp="2025-12-09 14:43:56 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.246572914 +0000 UTC m=+1281.142775050" lastFinishedPulling="2025-12-09 14:44:59.952917697 +0000 UTC m=+1331.849119833" observedRunningTime="2025-12-09 14:45:01.25039887 +0000 UTC m=+1333.146601016" watchObservedRunningTime="2025-12-09 14:45:01.257230489 +0000 UTC m=+1333.153432625" Dec 09 14:45:01 crc kubenswrapper[4770]: I1209 14:45:01.275332 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=15.304908153 podStartE2EDuration="1m5.27531454s" podCreationTimestamp="2025-12-09 14:43:56 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.970873977 +0000 UTC m=+1281.867076113" lastFinishedPulling="2025-12-09 14:44:59.941280374 +0000 UTC m=+1331.837482500" observedRunningTime="2025-12-09 14:45:01.265358404 +0000 UTC m=+1333.161560560" watchObservedRunningTime="2025-12-09 14:45:01.27531454 +0000 UTC m=+1333.171516676" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.055781 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2aa9-account-create-update-9b55f"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.058100 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.061017 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.257119 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-k7qqf"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.258408 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.261481 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" event={"ID":"b7044d2d-0426-4e15-ae14-5734939f8884","Type":"ContainerStarted","Data":"7de7e0cb63961f34d9bdfc63ec82990abbabbfd07fb49824546ce2f6ddfc6d36"} Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.264203 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"1ac324ef-d65f-421c-b382-9c321ae7d447","Type":"ContainerStarted","Data":"50ae320a9626f8bac6155926c5a40c132e8f8e8914e389eb9f34fc2fbae5861e"} Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.265484 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.267614 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-slxb5" event={"ID":"8eebc5e5-e737-4171-abed-1e04fa89b0b4","Type":"ContainerStarted","Data":"36a8bbf60ac92af6930f9458c8202528ac0155b6064185c69f0cc2827bca19cb"} Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.268267 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-slxb5" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.269164 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-k7qqf"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.272192 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95b1d2b0-6b25-4853-aae2-9cdc30773854","Type":"ContainerStarted","Data":"41526fda3f0c20637e90ec76b311210a3f964fc0aa8f7ee58559a48c1a5ccee1"} Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.272919 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.275815 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"67dab40a-3d7c-4737-bca9-28dc6280071c","Type":"ContainerStarted","Data":"e6294ff9df1a42596c0ca15d87f1bc1438473a73d9617196ad84a84c6fed3ac5"} Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.276151 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.278030 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" event={"ID":"dc16ff55-b814-4912-842a-2744c0450b51","Type":"ContainerStarted","Data":"2edb375c95928f33e563c39718052aaf79d7887c32eaf33288dfc14b0f2f0ce3"} Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.278416 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.310904 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2aa9-account-create-update-9b55f"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.333944 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bceb3d-b7f4-43f6-bb98-834984940e5b-operator-scripts\") pod 
\"keystone-2aa9-account-create-update-9b55f\" (UID: \"29bceb3d-b7f4-43f6-bb98-834984940e5b\") " pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.334011 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc7mn\" (UniqueName: \"kubernetes.io/projected/29bceb3d-b7f4-43f6-bb98-834984940e5b-kube-api-access-hc7mn\") pod \"keystone-2aa9-account-create-update-9b55f\" (UID: \"29bceb3d-b7f4-43f6-bb98-834984940e5b\") " pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.335457 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=26.024285956 podStartE2EDuration="1m18.335439683s" podCreationTimestamp="2025-12-09 14:43:44 +0000 UTC" firstStartedPulling="2025-12-09 14:44:07.590433856 +0000 UTC m=+1279.486636002" lastFinishedPulling="2025-12-09 14:44:59.901587593 +0000 UTC m=+1331.797789729" observedRunningTime="2025-12-09 14:45:02.333998154 +0000 UTC m=+1334.230200290" watchObservedRunningTime="2025-12-09 14:45:02.335439683 +0000 UTC m=+1334.231641819" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.362997 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=-9223371970.491798 podStartE2EDuration="1m6.362977367s" podCreationTimestamp="2025-12-09 14:43:56 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.961547089 +0000 UTC m=+1281.857749245" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:02.349360959 +0000 UTC m=+1334.245563095" watchObservedRunningTime="2025-12-09 14:45:02.362977367 +0000 UTC m=+1334.259179503" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.382715 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-slxb5" podStartSLOduration=23.705804818 podStartE2EDuration="1m14.382698463s" podCreationTimestamp="2025-12-09 14:43:48 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.236593078 +0000 UTC m=+1281.132795214" lastFinishedPulling="2025-12-09 14:44:59.913486723 +0000 UTC m=+1331.809688859" observedRunningTime="2025-12-09 14:45:02.373367045 +0000 UTC m=+1334.269569181" watchObservedRunningTime="2025-12-09 14:45:02.382698463 +0000 UTC m=+1334.278900599" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.411190 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-54p89"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.412811 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-54p89" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.421694 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2215-account-create-update-wnhj4"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.423086 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.424904 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.489376 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc7mn\" (UniqueName: \"kubernetes.io/projected/29bceb3d-b7f4-43f6-bb98-834984940e5b-kube-api-access-hc7mn\") pod \"keystone-2aa9-account-create-update-9b55f\" (UID: \"29bceb3d-b7f4-43f6-bb98-834984940e5b\") " pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.489805 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faf370f4-55f9-49a5-86a8-751c2b6ff94d-operator-scripts\") pod \"keystone-db-create-k7qqf\" (UID: \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\") " pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.489934 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n77x\" (UniqueName: \"kubernetes.io/projected/faf370f4-55f9-49a5-86a8-751c2b6ff94d-kube-api-access-7n77x\") pod \"keystone-db-create-k7qqf\" (UID: \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\") " pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.490061 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bceb3d-b7f4-43f6-bb98-834984940e5b-operator-scripts\") pod \"keystone-2aa9-account-create-update-9b55f\" (UID: \"29bceb3d-b7f4-43f6-bb98-834984940e5b\") " pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.491325 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" podStartSLOduration=2.4913070250000002 podStartE2EDuration="2.491307025s" podCreationTimestamp="2025-12-09 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:02.410543806 +0000 UTC m=+1334.306745942" watchObservedRunningTime="2025-12-09 14:45:02.491307025 +0000 UTC m=+1334.387509161" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.498255 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bceb3d-b7f4-43f6-bb98-834984940e5b-operator-scripts\") pod \"keystone-2aa9-account-create-update-9b55f\" (UID: \"29bceb3d-b7f4-43f6-bb98-834984940e5b\") " pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.510261 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-54p89"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.520460 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2215-account-create-update-wnhj4"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.528059 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" podStartSLOduration=-9223371970.32673 podStartE2EDuration="1m6.528044593s" 
podCreationTimestamp="2025-12-09 14:43:56 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.243080088 +0000 UTC m=+1281.139282224" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:02.491074729 +0000 UTC m=+1334.387276865" watchObservedRunningTime="2025-12-09 14:45:02.528044593 +0000 UTC m=+1334.424246729" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.530600 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=15.529816459 podStartE2EDuration="1m6.530592174s" podCreationTimestamp="2025-12-09 14:43:56 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.236562037 +0000 UTC m=+1281.132764173" lastFinishedPulling="2025-12-09 14:45:00.237337752 +0000 UTC m=+1332.133539888" observedRunningTime="2025-12-09 14:45:02.514371654 +0000 UTC m=+1334.410573790" watchObservedRunningTime="2025-12-09 14:45:02.530592174 +0000 UTC m=+1334.426794310" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.555376 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc7mn\" (UniqueName: \"kubernetes.io/projected/29bceb3d-b7f4-43f6-bb98-834984940e5b-kube-api-access-hc7mn\") pod \"keystone-2aa9-account-create-update-9b55f\" (UID: \"29bceb3d-b7f4-43f6-bb98-834984940e5b\") " pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.559757 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.591972 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx8qj\" (UniqueName: \"kubernetes.io/projected/d54c3930-0be3-4634-b250-921a22df3263-kube-api-access-bx8qj\") pod \"placement-2215-account-create-update-wnhj4\" (UID: \"d54c3930-0be3-4634-b250-921a22df3263\") " pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.592059 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c3930-0be3-4634-b250-921a22df3263-operator-scripts\") pod \"placement-2215-account-create-update-wnhj4\" (UID: \"d54c3930-0be3-4634-b250-921a22df3263\") " pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.592138 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82c7c967-d86d-4651-adf0-c7e2bc3eb428-operator-scripts\") pod \"placement-db-create-54p89\" (UID: \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\") " pod="openstack/placement-db-create-54p89" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.592193 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-528gh\" (UniqueName: \"kubernetes.io/projected/82c7c967-d86d-4651-adf0-c7e2bc3eb428-kube-api-access-528gh\") pod \"placement-db-create-54p89\" (UID: \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\") " pod="openstack/placement-db-create-54p89" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.592240 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/faf370f4-55f9-49a5-86a8-751c2b6ff94d-operator-scripts\") pod \"keystone-db-create-k7qqf\" (UID: \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\") " pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.592295 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n77x\" (UniqueName: \"kubernetes.io/projected/faf370f4-55f9-49a5-86a8-751c2b6ff94d-kube-api-access-7n77x\") pod \"keystone-db-create-k7qqf\" (UID: \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\") " pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.593130 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faf370f4-55f9-49a5-86a8-751c2b6ff94d-operator-scripts\") pod \"keystone-db-create-k7qqf\" (UID: \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\") " pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.610500 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n77x\" (UniqueName: \"kubernetes.io/projected/faf370f4-55f9-49a5-86a8-751c2b6ff94d-kube-api-access-7n77x\") pod \"keystone-db-create-k7qqf\" (UID: \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\") " pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.694066 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82c7c967-d86d-4651-adf0-c7e2bc3eb428-operator-scripts\") pod \"placement-db-create-54p89\" (UID: \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\") " pod="openstack/placement-db-create-54p89" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.694402 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-528gh\" (UniqueName: \"kubernetes.io/projected/82c7c967-d86d-4651-adf0-c7e2bc3eb428-kube-api-access-528gh\") pod \"placement-db-create-54p89\" (UID: \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\") " pod="openstack/placement-db-create-54p89" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.694571 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx8qj\" (UniqueName: \"kubernetes.io/projected/d54c3930-0be3-4634-b250-921a22df3263-kube-api-access-bx8qj\") pod \"placement-2215-account-create-update-wnhj4\" (UID: \"d54c3930-0be3-4634-b250-921a22df3263\") " pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.694672 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c3930-0be3-4634-b250-921a22df3263-operator-scripts\") pod \"placement-2215-account-create-update-wnhj4\" (UID: \"d54c3930-0be3-4634-b250-921a22df3263\") " pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.695654 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c3930-0be3-4634-b250-921a22df3263-operator-scripts\") pod \"placement-2215-account-create-update-wnhj4\" (UID: \"d54c3930-0be3-4634-b250-921a22df3263\") " pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.695879 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82c7c967-d86d-4651-adf0-c7e2bc3eb428-operator-scripts\") pod \"placement-db-create-54p89\" (UID: \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\") " pod="openstack/placement-db-create-54p89" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.717269 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-528gh\" (UniqueName: \"kubernetes.io/projected/82c7c967-d86d-4651-adf0-c7e2bc3eb428-kube-api-access-528gh\") pod \"placement-db-create-54p89\" (UID: \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\") " pod="openstack/placement-db-create-54p89" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.724248 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx8qj\" (UniqueName: \"kubernetes.io/projected/d54c3930-0be3-4634-b250-921a22df3263-kube-api-access-bx8qj\") pod \"placement-2215-account-create-update-wnhj4\" (UID: \"d54c3930-0be3-4634-b250-921a22df3263\") " pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.805865 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tdpdx"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.806900 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.806923 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.806935 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tdpdx"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.806947 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-695f-account-create-update-kfmvh"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.807135 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.807670 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-695f-account-create-update-kfmvh"] Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.807785 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.809697 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.891902 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.897170 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-operator-scripts\") pod \"glance-db-create-tdpdx\" (UID: \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\") " pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.897204 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491157c5-64af-4a59-8e9f-59695e2d7b6c-operator-scripts\") pod \"glance-695f-account-create-update-kfmvh\" (UID: \"491157c5-64af-4a59-8e9f-59695e2d7b6c\") " pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.897297 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59zd9\" (UniqueName: \"kubernetes.io/projected/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-kube-api-access-59zd9\") pod \"glance-db-create-tdpdx\" (UID: \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\") " pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.897414 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb7qm\" (UniqueName: \"kubernetes.io/projected/491157c5-64af-4a59-8e9f-59695e2d7b6c-kube-api-access-mb7qm\") pod \"glance-695f-account-create-update-kfmvh\" (UID: \"491157c5-64af-4a59-8e9f-59695e2d7b6c\") " pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.904584 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-54p89" Dec 09 14:45:02 crc kubenswrapper[4770]: I1209 14:45:02.917643 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:02.999238 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491157c5-64af-4a59-8e9f-59695e2d7b6c-operator-scripts\") pod \"glance-695f-account-create-update-kfmvh\" (UID: \"491157c5-64af-4a59-8e9f-59695e2d7b6c\") " pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:02.999355 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59zd9\" (UniqueName: \"kubernetes.io/projected/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-kube-api-access-59zd9\") pod \"glance-db-create-tdpdx\" (UID: \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\") " pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:02.999466 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb7qm\" (UniqueName: \"kubernetes.io/projected/491157c5-64af-4a59-8e9f-59695e2d7b6c-kube-api-access-mb7qm\") pod \"glance-695f-account-create-update-kfmvh\" (UID: \"491157c5-64af-4a59-8e9f-59695e2d7b6c\") " pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:02.999516 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-operator-scripts\") pod \"glance-db-create-tdpdx\" (UID: \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\") " pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.000045 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491157c5-64af-4a59-8e9f-59695e2d7b6c-operator-scripts\") pod \"glance-695f-account-create-update-kfmvh\" (UID: \"491157c5-64af-4a59-8e9f-59695e2d7b6c\") " pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.000185 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-operator-scripts\") pod \"glance-db-create-tdpdx\" (UID: \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\") " pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.152178 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb7qm\" (UniqueName: \"kubernetes.io/projected/491157c5-64af-4a59-8e9f-59695e2d7b6c-kube-api-access-mb7qm\") pod \"glance-695f-account-create-update-kfmvh\" (UID: \"491157c5-64af-4a59-8e9f-59695e2d7b6c\") " pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.156116 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59zd9\" (UniqueName: \"kubernetes.io/projected/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-kube-api-access-59zd9\") pod \"glance-db-create-tdpdx\" (UID: \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\") " pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.190537 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.226199 4770 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.233608 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.288908 4770 generic.go:334] "Generic (PLEG): container finished" podID="b7044d2d-0426-4e15-ae14-5734939f8884" containerID="7de7e0cb63961f34d9bdfc63ec82990abbabbfd07fb49824546ce2f6ddfc6d36" exitCode=0 Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.288979 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" event={"ID":"b7044d2d-0426-4e15-ae14-5734939f8884","Type":"ContainerDied","Data":"7de7e0cb63961f34d9bdfc63ec82990abbabbfd07fb49824546ce2f6ddfc6d36"} Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.291390 4770 generic.go:334] "Generic (PLEG): container finished" podID="f087e8e2-8532-4abc-925b-574ebb448bde" containerID="a22795465125c6142628dbbd793d5237eedc43f2338e4607ab0810aefc4c0487" exitCode=0 Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.292465 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5ng4w" event={"ID":"f087e8e2-8532-4abc-925b-574ebb448bde","Type":"ContainerDied","Data":"a22795465125c6142628dbbd793d5237eedc43f2338e4607ab0810aefc4c0487"} Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.377006 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 09 14:45:03 crc kubenswrapper[4770]: I1209 14:45:03.612798 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:45:03 crc kubenswrapper[4770]: E1209 14:45:03.612999 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 14:45:03 crc kubenswrapper[4770]: E1209 14:45:03.613022 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 14:45:03 crc kubenswrapper[4770]: E1209 14:45:03.613076 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift podName:cbc15e71-9605-466b-8947-aa2ca716bc2d nodeName:}" failed. No retries permitted until 2025-12-09 14:45:11.613060356 +0000 UTC m=+1343.509262492 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift") pod "swift-storage-0" (UID: "cbc15e71-9605-466b-8947-aa2ca716bc2d") : configmap "swift-ring-files" not found Dec 09 14:45:04 crc kubenswrapper[4770]: I1209 14:45:04.304162 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerStarted","Data":"a9a595d8db596474a74b7145ea47c3669fe97865184fa19856a0ba99f5f74215"} Dec 09 14:45:04 crc kubenswrapper[4770]: I1209 14:45:04.308936 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"08118160-2e03-4319-97ed-051b92b14c1e","Type":"ContainerStarted","Data":"14ed4cfe50efb1e05cdce48b9d0f4834401ae088ae50e48f0e309a03a2b4cd51"} Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.114181 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.249837 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzbqw\" (UniqueName: \"kubernetes.io/projected/b7044d2d-0426-4e15-ae14-5734939f8884-kube-api-access-rzbqw\") pod \"b7044d2d-0426-4e15-ae14-5734939f8884\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.249934 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7044d2d-0426-4e15-ae14-5734939f8884-config-volume\") pod \"b7044d2d-0426-4e15-ae14-5734939f8884\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.249974 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7044d2d-0426-4e15-ae14-5734939f8884-secret-volume\") pod \"b7044d2d-0426-4e15-ae14-5734939f8884\" (UID: \"b7044d2d-0426-4e15-ae14-5734939f8884\") " Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.251379 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7044d2d-0426-4e15-ae14-5734939f8884-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7044d2d-0426-4e15-ae14-5734939f8884" (UID: "b7044d2d-0426-4e15-ae14-5734939f8884"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.254812 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7044d2d-0426-4e15-ae14-5734939f8884-kube-api-access-rzbqw" (OuterVolumeSpecName: "kube-api-access-rzbqw") pod "b7044d2d-0426-4e15-ae14-5734939f8884" (UID: "b7044d2d-0426-4e15-ae14-5734939f8884"). InnerVolumeSpecName "kube-api-access-rzbqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.255510 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7044d2d-0426-4e15-ae14-5734939f8884-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7044d2d-0426-4e15-ae14-5734939f8884" (UID: "b7044d2d-0426-4e15-ae14-5734939f8884"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.325219 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" event={"ID":"b7044d2d-0426-4e15-ae14-5734939f8884","Type":"ContainerDied","Data":"9809d15e409f64c8c15a81d4e8e0c0ca10e7f4d94d9585b44ce67337f82156ad"} Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.325520 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9809d15e409f64c8c15a81d4e8e0c0ca10e7f4d94d9585b44ce67337f82156ad" Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.325302 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws" Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.352313 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzbqw\" (UniqueName: \"kubernetes.io/projected/b7044d2d-0426-4e15-ae14-5734939f8884-kube-api-access-rzbqw\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.352337 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7044d2d-0426-4e15-ae14-5734939f8884-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.352346 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7044d2d-0426-4e15-ae14-5734939f8884-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.442364 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2215-account-create-update-wnhj4"] Dec 09 14:45:05 crc kubenswrapper[4770]: W1209 14:45:05.447203 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd54c3930_0be3_4634_b250_921a22df3263.slice/crio-508295ea5f24d937ca739e3448202edb57fdbcbcd64bd6b98bccb522201e1b56 WatchSource:0}: Error finding container 508295ea5f24d937ca739e3448202edb57fdbcbcd64bd6b98bccb522201e1b56: Status 404 returned error can't find the container with id 508295ea5f24d937ca739e3448202edb57fdbcbcd64bd6b98bccb522201e1b56 Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.564824 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2aa9-account-create-update-9b55f"] Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.572656 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-695f-account-create-update-kfmvh"] Dec 09 14:45:05 crc kubenswrapper[4770]: W1209 14:45:05.572846 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29bceb3d_b7f4_43f6_bb98_834984940e5b.slice/crio-acf4c2e5d4faa13edbbc835524c64d9fa300fd60740348ad56a64fb4c3ea0f95 WatchSource:0}: Error finding container acf4c2e5d4faa13edbbc835524c64d9fa300fd60740348ad56a64fb4c3ea0f95: Status 404 returned error can't find the container with id acf4c2e5d4faa13edbbc835524c64d9fa300fd60740348ad56a64fb4c3ea0f95 Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.594607 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-k7qqf"] Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.738482 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-db-create-54p89"] Dec 09 14:45:05 crc kubenswrapper[4770]: I1209 14:45:05.748350 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tdpdx"] Dec 09 14:45:05 crc kubenswrapper[4770]: W1209 14:45:05.753876 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11c8e0c9_b0a4_4dde_a79a_db5fa371e37c.slice/crio-c7b33c5aa116539094202886e89dcdf129ffe2ffd5e5261c95e178a0de753b99 WatchSource:0}: Error finding container c7b33c5aa116539094202886e89dcdf129ffe2ffd5e5261c95e178a0de753b99: Status 404 returned error can't find the container with id c7b33c5aa116539094202886e89dcdf129ffe2ffd5e5261c95e178a0de753b99 Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.334345 4770 generic.go:334] "Generic (PLEG): container finished" podID="faf370f4-55f9-49a5-86a8-751c2b6ff94d" containerID="42e27446d2328b3528785983be28da151de673b32df703921db257ccb5e30920" exitCode=0 Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.334499 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k7qqf" event={"ID":"faf370f4-55f9-49a5-86a8-751c2b6ff94d","Type":"ContainerDied","Data":"42e27446d2328b3528785983be28da151de673b32df703921db257ccb5e30920"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.335846 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k7qqf" event={"ID":"faf370f4-55f9-49a5-86a8-751c2b6ff94d","Type":"ContainerStarted","Data":"eaa85e26a7afb15b9d1abfa5a20560df7db365a5a8ec6cd5387c2537a9db33e5"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.337490 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tdpdx" event={"ID":"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c","Type":"ContainerStarted","Data":"e40961b4fca2b0e643e0591d89c5f0cd204691d8585c0aa54352af0043056fdb"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.337533 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tdpdx" event={"ID":"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c","Type":"ContainerStarted","Data":"c7b33c5aa116539094202886e89dcdf129ffe2ffd5e5261c95e178a0de753b99"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.339098 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dshr4" event={"ID":"b8f30831-ad4f-4009-b177-e645f911f5b4","Type":"ContainerStarted","Data":"adb65e1a987e40c8f3c2ef099964e0926fe0f5c9362de5861e32b9695a4ca187"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.340458 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-695f-account-create-update-kfmvh" event={"ID":"491157c5-64af-4a59-8e9f-59695e2d7b6c","Type":"ContainerStarted","Data":"f6467f06d4bbdd125436c8d1274f12324c97dc5763335c7143ce89133fb7c9a4"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.340489 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-695f-account-create-update-kfmvh" event={"ID":"491157c5-64af-4a59-8e9f-59695e2d7b6c","Type":"ContainerStarted","Data":"ca7c5fd84b46a2a6da3946d9f8e24807f6c16eb59759102f83f2eef7a030ffad"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.341819 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2aa9-account-create-update-9b55f" event={"ID":"29bceb3d-b7f4-43f6-bb98-834984940e5b","Type":"ContainerStarted","Data":"7e60a59376694e1847b9543a778045b77d4cc3a3743f8bfa2d638514e5ceb6d8"} Dec 09 14:45:06 crc 
kubenswrapper[4770]: I1209 14:45:06.341980 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2aa9-account-create-update-9b55f" event={"ID":"29bceb3d-b7f4-43f6-bb98-834984940e5b","Type":"ContainerStarted","Data":"acf4c2e5d4faa13edbbc835524c64d9fa300fd60740348ad56a64fb4c3ea0f95"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.344315 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5ng4w" event={"ID":"f087e8e2-8532-4abc-925b-574ebb448bde","Type":"ContainerStarted","Data":"60c54e2557b2f598d0b0893182da7aee3a3ab444a2ac5648b03795fef7aaeace"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.344541 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5ng4w" event={"ID":"f087e8e2-8532-4abc-925b-574ebb448bde","Type":"ContainerStarted","Data":"1e4faf0e45d72c47f2ff83837478201fbfbd493dd31337bf5be354bc0345ff20"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.344752 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.344783 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.349109 4770 generic.go:334] "Generic (PLEG): container finished" podID="d54c3930-0be3-4634-b250-921a22df3263" containerID="8ce09b6881e3c0c16c007d1a1c573106fc46a25f30c41ec8f657cc3d0f29011f" exitCode=0 Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.349165 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2215-account-create-update-wnhj4" event={"ID":"d54c3930-0be3-4634-b250-921a22df3263","Type":"ContainerDied","Data":"8ce09b6881e3c0c16c007d1a1c573106fc46a25f30c41ec8f657cc3d0f29011f"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.349189 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2215-account-create-update-wnhj4" event={"ID":"d54c3930-0be3-4634-b250-921a22df3263","Type":"ContainerStarted","Data":"508295ea5f24d937ca739e3448202edb57fdbcbcd64bd6b98bccb522201e1b56"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.350740 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-54p89" event={"ID":"82c7c967-d86d-4651-adf0-c7e2bc3eb428","Type":"ContainerStarted","Data":"e4ba7f180a93ca8c3d8d545b39346716c5e28825b22a4f8aafdd43d4268cf142"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.350771 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-54p89" event={"ID":"82c7c967-d86d-4651-adf0-c7e2bc3eb428","Type":"ContainerStarted","Data":"c7e87ba5a3c332af8c637f8960dce98b88fa9c8d366e57ec9854b75fbe5849be"} Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.363942 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-2aa9-account-create-update-9b55f" podStartSLOduration=4.363926557 podStartE2EDuration="4.363926557s" podCreationTimestamp="2025-12-09 14:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:06.36079348 +0000 UTC m=+1338.256995626" watchObservedRunningTime="2025-12-09 14:45:06.363926557 +0000 UTC m=+1338.260128693" Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.385549 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-695f-account-create-update-kfmvh" podStartSLOduration=4.385528066 podStartE2EDuration="4.385528066s" podCreationTimestamp="2025-12-09 14:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:06.379942381 +0000 UTC m=+1338.276144517" watchObservedRunningTime="2025-12-09 14:45:06.385528066 +0000 UTC m=+1338.281730202" Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.403358 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5ng4w" podStartSLOduration=28.46890018 podStartE2EDuration="1m18.40333994s" podCreationTimestamp="2025-12-09 14:43:48 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.836853211 +0000 UTC m=+1281.733055347" lastFinishedPulling="2025-12-09 14:44:59.771292971 +0000 UTC m=+1331.667495107" observedRunningTime="2025-12-09 14:45:06.401183601 +0000 UTC m=+1338.297385737" watchObservedRunningTime="2025-12-09 14:45:06.40333994 +0000 UTC m=+1338.299542076" Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.426182 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-tdpdx" podStartSLOduration=4.426162493 podStartE2EDuration="4.426162493s" podCreationTimestamp="2025-12-09 14:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:06.416823794 +0000 UTC m=+1338.313025930" watchObservedRunningTime="2025-12-09 14:45:06.426162493 +0000 UTC m=+1338.322364639" Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.450141 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dshr4" podStartSLOduration=2.418076462 podStartE2EDuration="10.450122727s" podCreationTimestamp="2025-12-09 14:44:56 +0000 UTC" firstStartedPulling="2025-12-09 14:44:57.111944468 +0000 UTC m=+1329.008146604" lastFinishedPulling="2025-12-09 14:45:05.143990733 +0000 UTC m=+1337.040192869" observedRunningTime="2025-12-09 14:45:06.435697477 +0000 UTC m=+1338.331899613" watchObservedRunningTime="2025-12-09 14:45:06.450122727 +0000 UTC m=+1338.346324863" Dec 09 14:45:06 crc kubenswrapper[4770]: I1209 14:45:06.452263 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-54p89" podStartSLOduration=4.452254987 podStartE2EDuration="4.452254987s" podCreationTimestamp="2025-12-09 14:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:06.449191902 +0000 UTC m=+1338.345394048" watchObservedRunningTime="2025-12-09 14:45:06.452254987 +0000 UTC m=+1338.348457123" Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.379484 4770 generic.go:334] "Generic (PLEG): container finished" podID="82c7c967-d86d-4651-adf0-c7e2bc3eb428" containerID="e4ba7f180a93ca8c3d8d545b39346716c5e28825b22a4f8aafdd43d4268cf142" exitCode=0 Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.379844 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-54p89" event={"ID":"82c7c967-d86d-4651-adf0-c7e2bc3eb428","Type":"ContainerDied","Data":"e4ba7f180a93ca8c3d8d545b39346716c5e28825b22a4f8aafdd43d4268cf142"} Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.381458 4770 generic.go:334] "Generic (PLEG): container finished" podID="11c8e0c9-b0a4-4dde-a79a-db5fa371e37c" 
containerID="e40961b4fca2b0e643e0591d89c5f0cd204691d8585c0aa54352af0043056fdb" exitCode=0 Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.381521 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tdpdx" event={"ID":"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c","Type":"ContainerDied","Data":"e40961b4fca2b0e643e0591d89c5f0cd204691d8585c0aa54352af0043056fdb"} Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.383104 4770 generic.go:334] "Generic (PLEG): container finished" podID="491157c5-64af-4a59-8e9f-59695e2d7b6c" containerID="f6467f06d4bbdd125436c8d1274f12324c97dc5763335c7143ce89133fb7c9a4" exitCode=0 Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.383158 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-695f-account-create-update-kfmvh" event={"ID":"491157c5-64af-4a59-8e9f-59695e2d7b6c","Type":"ContainerDied","Data":"f6467f06d4bbdd125436c8d1274f12324c97dc5763335c7143ce89133fb7c9a4"} Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.385443 4770 generic.go:334] "Generic (PLEG): container finished" podID="29bceb3d-b7f4-43f6-bb98-834984940e5b" containerID="7e60a59376694e1847b9543a778045b77d4cc3a3743f8bfa2d638514e5ceb6d8" exitCode=0 Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.385541 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2aa9-account-create-update-9b55f" event={"ID":"29bceb3d-b7f4-43f6-bb98-834984940e5b","Type":"ContainerDied","Data":"7e60a59376694e1847b9543a778045b77d4cc3a3743f8bfa2d638514e5ceb6d8"} Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.830931 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.838161 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.977404 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx8qj\" (UniqueName: \"kubernetes.io/projected/d54c3930-0be3-4634-b250-921a22df3263-kube-api-access-bx8qj\") pod \"d54c3930-0be3-4634-b250-921a22df3263\" (UID: \"d54c3930-0be3-4634-b250-921a22df3263\") " Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.977514 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n77x\" (UniqueName: \"kubernetes.io/projected/faf370f4-55f9-49a5-86a8-751c2b6ff94d-kube-api-access-7n77x\") pod \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\" (UID: \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\") " Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.977557 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faf370f4-55f9-49a5-86a8-751c2b6ff94d-operator-scripts\") pod \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\" (UID: \"faf370f4-55f9-49a5-86a8-751c2b6ff94d\") " Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.977655 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c3930-0be3-4634-b250-921a22df3263-operator-scripts\") pod \"d54c3930-0be3-4634-b250-921a22df3263\" (UID: \"d54c3930-0be3-4634-b250-921a22df3263\") " Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.978591 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d54c3930-0be3-4634-b250-921a22df3263-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d54c3930-0be3-4634-b250-921a22df3263" (UID: "d54c3930-0be3-4634-b250-921a22df3263"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.979050 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf370f4-55f9-49a5-86a8-751c2b6ff94d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "faf370f4-55f9-49a5-86a8-751c2b6ff94d" (UID: "faf370f4-55f9-49a5-86a8-751c2b6ff94d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.983088 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf370f4-55f9-49a5-86a8-751c2b6ff94d-kube-api-access-7n77x" (OuterVolumeSpecName: "kube-api-access-7n77x") pod "faf370f4-55f9-49a5-86a8-751c2b6ff94d" (UID: "faf370f4-55f9-49a5-86a8-751c2b6ff94d"). InnerVolumeSpecName "kube-api-access-7n77x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:07 crc kubenswrapper[4770]: I1209 14:45:07.983199 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54c3930-0be3-4634-b250-921a22df3263-kube-api-access-bx8qj" (OuterVolumeSpecName: "kube-api-access-bx8qj") pod "d54c3930-0be3-4634-b250-921a22df3263" (UID: "d54c3930-0be3-4634-b250-921a22df3263"). InnerVolumeSpecName "kube-api-access-bx8qj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.081189 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c3930-0be3-4634-b250-921a22df3263-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.081559 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx8qj\" (UniqueName: \"kubernetes.io/projected/d54c3930-0be3-4634-b250-921a22df3263-kube-api-access-bx8qj\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.081580 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n77x\" (UniqueName: \"kubernetes.io/projected/faf370f4-55f9-49a5-86a8-751c2b6ff94d-kube-api-access-7n77x\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.081596 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faf370f4-55f9-49a5-86a8-751c2b6ff94d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.395762 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3917b78f-5515-4149-82d3-96a981c77ac5","Type":"ContainerStarted","Data":"675f0c446c8279f5d0f19c0792ff3b13cfe85e5862edb9145780f8cb2db24da3"} Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.399256 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2215-account-create-update-wnhj4" event={"ID":"d54c3930-0be3-4634-b250-921a22df3263","Type":"ContainerDied","Data":"508295ea5f24d937ca739e3448202edb57fdbcbcd64bd6b98bccb522201e1b56"} Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.399286 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508295ea5f24d937ca739e3448202edb57fdbcbcd64bd6b98bccb522201e1b56" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.399334 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2215-account-create-update-wnhj4" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.403339 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k7qqf" event={"ID":"faf370f4-55f9-49a5-86a8-751c2b6ff94d","Type":"ContainerDied","Data":"eaa85e26a7afb15b9d1abfa5a20560df7db365a5a8ec6cd5387c2537a9db33e5"} Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.403377 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaa85e26a7afb15b9d1abfa5a20560df7db365a5a8ec6cd5387c2537a9db33e5" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.403335 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k7qqf" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.426308 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=27.830304251 podStartE2EDuration="1m21.426267669s" podCreationTimestamp="2025-12-09 14:43:47 +0000 UTC" firstStartedPulling="2025-12-09 14:44:14.401152471 +0000 UTC m=+1286.297354607" lastFinishedPulling="2025-12-09 14:45:07.997115889 +0000 UTC m=+1339.893318025" observedRunningTime="2025-12-09 14:45:08.422754121 +0000 UTC m=+1340.318956267" watchObservedRunningTime="2025-12-09 14:45:08.426267669 +0000 UTC m=+1340.322469805" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.766790 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 09 14:45:08 crc kubenswrapper[4770]: I1209 14:45:08.878897 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.004344 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb7qm\" (UniqueName: \"kubernetes.io/projected/491157c5-64af-4a59-8e9f-59695e2d7b6c-kube-api-access-mb7qm\") pod \"491157c5-64af-4a59-8e9f-59695e2d7b6c\" (UID: \"491157c5-64af-4a59-8e9f-59695e2d7b6c\") " Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.004504 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491157c5-64af-4a59-8e9f-59695e2d7b6c-operator-scripts\") pod \"491157c5-64af-4a59-8e9f-59695e2d7b6c\" (UID: \"491157c5-64af-4a59-8e9f-59695e2d7b6c\") " Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.006572 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/491157c5-64af-4a59-8e9f-59695e2d7b6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "491157c5-64af-4a59-8e9f-59695e2d7b6c" (UID: "491157c5-64af-4a59-8e9f-59695e2d7b6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.076047 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/491157c5-64af-4a59-8e9f-59695e2d7b6c-kube-api-access-mb7qm" (OuterVolumeSpecName: "kube-api-access-mb7qm") pod "491157c5-64af-4a59-8e9f-59695e2d7b6c" (UID: "491157c5-64af-4a59-8e9f-59695e2d7b6c"). InnerVolumeSpecName "kube-api-access-mb7qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.094312 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.096225 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.103871 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-54p89" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.106909 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb7qm\" (UniqueName: \"kubernetes.io/projected/491157c5-64af-4a59-8e9f-59695e2d7b6c-kube-api-access-mb7qm\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.106933 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491157c5-64af-4a59-8e9f-59695e2d7b6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.208357 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59zd9\" (UniqueName: \"kubernetes.io/projected/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-kube-api-access-59zd9\") pod \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\" (UID: \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\") " Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.208522 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82c7c967-d86d-4651-adf0-c7e2bc3eb428-operator-scripts\") pod \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\" (UID: \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\") " Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.208620 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bceb3d-b7f4-43f6-bb98-834984940e5b-operator-scripts\") pod \"29bceb3d-b7f4-43f6-bb98-834984940e5b\" (UID: \"29bceb3d-b7f4-43f6-bb98-834984940e5b\") " Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.208637 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-operator-scripts\") pod \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\" (UID: \"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c\") " Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.208676 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc7mn\" (UniqueName: \"kubernetes.io/projected/29bceb3d-b7f4-43f6-bb98-834984940e5b-kube-api-access-hc7mn\") pod \"29bceb3d-b7f4-43f6-bb98-834984940e5b\" (UID: \"29bceb3d-b7f4-43f6-bb98-834984940e5b\") " Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.208716 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-528gh\" (UniqueName: \"kubernetes.io/projected/82c7c967-d86d-4651-adf0-c7e2bc3eb428-kube-api-access-528gh\") pod \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\" (UID: \"82c7c967-d86d-4651-adf0-c7e2bc3eb428\") " Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.209159 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c7c967-d86d-4651-adf0-c7e2bc3eb428-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82c7c967-d86d-4651-adf0-c7e2bc3eb428" (UID: "82c7c967-d86d-4651-adf0-c7e2bc3eb428"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.209227 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bceb3d-b7f4-43f6-bb98-834984940e5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29bceb3d-b7f4-43f6-bb98-834984940e5b" (UID: "29bceb3d-b7f4-43f6-bb98-834984940e5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.209250 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11c8e0c9-b0a4-4dde-a79a-db5fa371e37c" (UID: "11c8e0c9-b0a4-4dde-a79a-db5fa371e37c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.211255 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-kube-api-access-59zd9" (OuterVolumeSpecName: "kube-api-access-59zd9") pod "11c8e0c9-b0a4-4dde-a79a-db5fa371e37c" (UID: "11c8e0c9-b0a4-4dde-a79a-db5fa371e37c"). InnerVolumeSpecName "kube-api-access-59zd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.211665 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c7c967-d86d-4651-adf0-c7e2bc3eb428-kube-api-access-528gh" (OuterVolumeSpecName: "kube-api-access-528gh") pod "82c7c967-d86d-4651-adf0-c7e2bc3eb428" (UID: "82c7c967-d86d-4651-adf0-c7e2bc3eb428"). InnerVolumeSpecName "kube-api-access-528gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.212235 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bceb3d-b7f4-43f6-bb98-834984940e5b-kube-api-access-hc7mn" (OuterVolumeSpecName: "kube-api-access-hc7mn") pod "29bceb3d-b7f4-43f6-bb98-834984940e5b" (UID: "29bceb3d-b7f4-43f6-bb98-834984940e5b"). InnerVolumeSpecName "kube-api-access-hc7mn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.310627 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bceb3d-b7f4-43f6-bb98-834984940e5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.310893 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.311058 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc7mn\" (UniqueName: \"kubernetes.io/projected/29bceb3d-b7f4-43f6-bb98-834984940e5b-kube-api-access-hc7mn\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.311141 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-528gh\" (UniqueName: \"kubernetes.io/projected/82c7c967-d86d-4651-adf0-c7e2bc3eb428-kube-api-access-528gh\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.311199 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59zd9\" (UniqueName: \"kubernetes.io/projected/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c-kube-api-access-59zd9\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.311262 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82c7c967-d86d-4651-adf0-c7e2bc3eb428-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.411797 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tdpdx" event={"ID":"11c8e0c9-b0a4-4dde-a79a-db5fa371e37c","Type":"ContainerDied","Data":"c7b33c5aa116539094202886e89dcdf129ffe2ffd5e5261c95e178a0de753b99"} Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.411834 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b33c5aa116539094202886e89dcdf129ffe2ffd5e5261c95e178a0de753b99" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.413025 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tdpdx" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.413069 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-695f-account-create-update-kfmvh" event={"ID":"491157c5-64af-4a59-8e9f-59695e2d7b6c","Type":"ContainerDied","Data":"ca7c5fd84b46a2a6da3946d9f8e24807f6c16eb59759102f83f2eef7a030ffad"} Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.413254 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7c5fd84b46a2a6da3946d9f8e24807f6c16eb59759102f83f2eef7a030ffad" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.413082 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-695f-account-create-update-kfmvh" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.414035 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2aa9-account-create-update-9b55f" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.414062 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2aa9-account-create-update-9b55f" event={"ID":"29bceb3d-b7f4-43f6-bb98-834984940e5b","Type":"ContainerDied","Data":"acf4c2e5d4faa13edbbc835524c64d9fa300fd60740348ad56a64fb4c3ea0f95"} Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.414144 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf4c2e5d4faa13edbbc835524c64d9fa300fd60740348ad56a64fb4c3ea0f95" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.416069 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-54p89" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.423455 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-54p89" event={"ID":"82c7c967-d86d-4651-adf0-c7e2bc3eb428","Type":"ContainerDied","Data":"c7e87ba5a3c332af8c637f8960dce98b88fa9c8d366e57ec9854b75fbe5849be"} Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.423498 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e87ba5a3c332af8c637f8960dce98b88fa9c8d366e57ec9854b75fbe5849be" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.766297 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.884999 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.977532 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vbhct"] Dec 09 14:45:09 crc kubenswrapper[4770]: I1209 14:45:09.977877 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-vbhct" podUID="1b62ee43-2d8d-4d9d-a333-ba36d6755816" containerName="dnsmasq-dns" containerID="cri-o://b235b3e290b3ae8f48775feaca1808ec624cb6a8a92b6768794d4f8c659c1172" gracePeriod=10 Dec 09 14:45:10 crc kubenswrapper[4770]: E1209 14:45:10.122135 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b62ee43_2d8d_4d9d_a333_ba36d6755816.slice/crio-conmon-b235b3e290b3ae8f48775feaca1808ec624cb6a8a92b6768794d4f8c659c1172.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08118160_2e03_4319_97ed_051b92b14c1e.slice/crio-conmon-14ed4cfe50efb1e05cdce48b9d0f4834401ae088ae50e48f0e309a03a2b4cd51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08118160_2e03_4319_97ed_051b92b14c1e.slice/crio-14ed4cfe50efb1e05cdce48b9d0f4834401ae088ae50e48f0e309a03a2b4cd51.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.426369 4770 generic.go:334] "Generic (PLEG): container finished" podID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerID="a9a595d8db596474a74b7145ea47c3669fe97865184fa19856a0ba99f5f74215" exitCode=0 Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.426750 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerDied","Data":"a9a595d8db596474a74b7145ea47c3669fe97865184fa19856a0ba99f5f74215"} Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.433488 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vbhct" event={"ID":"1b62ee43-2d8d-4d9d-a333-ba36d6755816","Type":"ContainerDied","Data":"b235b3e290b3ae8f48775feaca1808ec624cb6a8a92b6768794d4f8c659c1172"} Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.433391 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b62ee43-2d8d-4d9d-a333-ba36d6755816" containerID="b235b3e290b3ae8f48775feaca1808ec624cb6a8a92b6768794d4f8c659c1172" exitCode=0 Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.438150 4770 generic.go:334] "Generic (PLEG): container finished" podID="08118160-2e03-4319-97ed-051b92b14c1e" containerID="14ed4cfe50efb1e05cdce48b9d0f4834401ae088ae50e48f0e309a03a2b4cd51" exitCode=0 Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.438514 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"08118160-2e03-4319-97ed-051b92b14c1e","Type":"ContainerDied","Data":"14ed4cfe50efb1e05cdce48b9d0f4834401ae088ae50e48f0e309a03a2b4cd51"} Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.527618 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vbhct" Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.637432 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnfdf\" (UniqueName: \"kubernetes.io/projected/1b62ee43-2d8d-4d9d-a333-ba36d6755816-kube-api-access-gnfdf\") pod \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.637602 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-sb\") pod \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.637673 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-config\") pod \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.637713 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-nb\") pod \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.637762 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-dns-svc\") pod \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.643857 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b62ee43-2d8d-4d9d-a333-ba36d6755816-kube-api-access-gnfdf" (OuterVolumeSpecName: "kube-api-access-gnfdf") pod "1b62ee43-2d8d-4d9d-a333-ba36d6755816" (UID: 
"1b62ee43-2d8d-4d9d-a333-ba36d6755816"). InnerVolumeSpecName "kube-api-access-gnfdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.683251 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-config" (OuterVolumeSpecName: "config") pod "1b62ee43-2d8d-4d9d-a333-ba36d6755816" (UID: "1b62ee43-2d8d-4d9d-a333-ba36d6755816"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.684512 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b62ee43-2d8d-4d9d-a333-ba36d6755816" (UID: "1b62ee43-2d8d-4d9d-a333-ba36d6755816"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.711545 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b62ee43-2d8d-4d9d-a333-ba36d6755816" (UID: "1b62ee43-2d8d-4d9d-a333-ba36d6755816"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:10 crc kubenswrapper[4770]: E1209 14:45:10.713570 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-dns-svc podName:1b62ee43-2d8d-4d9d-a333-ba36d6755816 nodeName:}" failed. No retries permitted until 2025-12-09 14:45:11.213434333 +0000 UTC m=+1343.109636469 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-dns-svc") pod "1b62ee43-2d8d-4d9d-a333-ba36d6755816" (UID: "1b62ee43-2d8d-4d9d-a333-ba36d6755816") : error deleting /var/lib/kubelet/pods/1b62ee43-2d8d-4d9d-a333-ba36d6755816/volume-subpaths: remove /var/lib/kubelet/pods/1b62ee43-2d8d-4d9d-a333-ba36d6755816/volume-subpaths: no such file or directory Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.743484 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnfdf\" (UniqueName: \"kubernetes.io/projected/1b62ee43-2d8d-4d9d-a333-ba36d6755816-kube-api-access-gnfdf\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.743818 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.743894 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:10 crc kubenswrapper[4770]: I1209 14:45:10.743958 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.263305 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-dns-svc\") pod \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\" (UID: \"1b62ee43-2d8d-4d9d-a333-ba36d6755816\") " Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.263920 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b62ee43-2d8d-4d9d-a333-ba36d6755816" (UID: "1b62ee43-2d8d-4d9d-a333-ba36d6755816"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.264225 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b62ee43-2d8d-4d9d-a333-ba36d6755816-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.450039 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vbhct" event={"ID":"1b62ee43-2d8d-4d9d-a333-ba36d6755816","Type":"ContainerDied","Data":"6bafaaef0c5745d6f7404ae41335e856643bc0fc2266f554a2a0dc7fec6e1e99"} Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.450114 4770 util.go:48] "No ready sandbox for pod can be found. 
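[Editor's note: the "No retries permitted until ... (durationBeforeRetry 500ms)" line above is the volume manager's per-operation exponential backoff: a failed mount/unmount is retried after a delay that roughly doubles per consecutive failure, which is consistent with the 16s durationBeforeRetry on the etc-swift mount further below. A sketch of that shape; the initial delay, factor, and cap here are illustrative assumptions, not kubelet constants:]

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of per-operation exponential backoff as implied by the
// "durationBeforeRetry" values in this log (500ms here, 16s below).
func durationBeforeRetry(failures int) time.Duration {
	d := 500 * time.Millisecond
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= 2*time.Minute { // assumed cap, for illustration only
			return 2 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> retry in %v\n", n, durationBeforeRetry(n))
	}
	// failure 1 -> 500ms, failure 2 -> 1s, ..., failure 6 -> 16s
}
```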
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vbhct" Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.450120 4770 scope.go:117] "RemoveContainer" containerID="b235b3e290b3ae8f48775feaca1808ec624cb6a8a92b6768794d4f8c659c1172" Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.476930 4770 scope.go:117] "RemoveContainer" containerID="dab547dbaf6a318640cf88cbe024faa57c57f3c4f6c3afc164e86d009256651a" Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.495028 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vbhct"] Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.504017 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vbhct"] Dec 09 14:45:11 crc kubenswrapper[4770]: I1209 14:45:11.674795 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:45:11 crc kubenswrapper[4770]: E1209 14:45:11.675203 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 14:45:11 crc kubenswrapper[4770]: E1209 14:45:11.675258 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 14:45:11 crc kubenswrapper[4770]: E1209 14:45:11.675334 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift podName:cbc15e71-9605-466b-8947-aa2ca716bc2d nodeName:}" failed. No retries permitted until 2025-12-09 14:45:27.675307181 +0000 UTC m=+1359.571509357 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift") pod "swift-storage-0" (UID: "cbc15e71-9605-466b-8947-aa2ca716bc2d") : configmap "swift-ring-files" not found Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.601093 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b62ee43-2d8d-4d9d-a333-ba36d6755816" path="/var/lib/kubelet/pods/1b62ee43-2d8d-4d9d-a333-ba36d6755816/volumes" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.835534 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.993708 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gbddk"] Dec 09 14:45:12 crc kubenswrapper[4770]: E1209 14:45:12.994076 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c7c967-d86d-4651-adf0-c7e2bc3eb428" containerName="mariadb-database-create" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994089 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c7c967-d86d-4651-adf0-c7e2bc3eb428" containerName="mariadb-database-create" Dec 09 14:45:12 crc kubenswrapper[4770]: E1209 14:45:12.994102 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b62ee43-2d8d-4d9d-a333-ba36d6755816" containerName="init" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994108 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b62ee43-2d8d-4d9d-a333-ba36d6755816" containerName="init" Dec 09 14:45:12 crc kubenswrapper[4770]: E1209 14:45:12.994116 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b62ee43-2d8d-4d9d-a333-ba36d6755816" containerName="dnsmasq-dns" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994123 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b62ee43-2d8d-4d9d-a333-ba36d6755816" containerName="dnsmasq-dns" Dec 09 14:45:12 crc kubenswrapper[4770]: E1209 14:45:12.994132 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54c3930-0be3-4634-b250-921a22df3263" containerName="mariadb-account-create-update" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994139 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54c3930-0be3-4634-b250-921a22df3263" containerName="mariadb-account-create-update" Dec 09 14:45:12 crc kubenswrapper[4770]: E1209 14:45:12.994156 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7044d2d-0426-4e15-ae14-5734939f8884" containerName="collect-profiles" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994163 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7044d2d-0426-4e15-ae14-5734939f8884" containerName="collect-profiles" Dec 09 14:45:12 crc kubenswrapper[4770]: E1209 14:45:12.994173 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="491157c5-64af-4a59-8e9f-59695e2d7b6c" containerName="mariadb-account-create-update" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994178 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="491157c5-64af-4a59-8e9f-59695e2d7b6c" containerName="mariadb-account-create-update" Dec 09 14:45:12 crc kubenswrapper[4770]: E1209 14:45:12.994192 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c8e0c9-b0a4-4dde-a79a-db5fa371e37c" containerName="mariadb-database-create" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994198 4770 state_mem.go:107] 
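[Editor's note: the etc-swift failure above is a projected volume whose source ConfigMap, swift-ring-files, does not exist yet; swift-storage-0 stays pending until the swift-ring-rebalance job publishes the rings. An illustrative reconstruction of such a volume using the k8s.io/api types; the real pod spec is owned by the Swift operator, so treat the shape below as an assumption:]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected ConfigMap source without Optional=true makes
	// MountVolume.SetUp fail (and the pod wait) until the named
	// ConfigMap exists, which matches the error in the log above.
	vol := corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "swift-ring-files",
						},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```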
"Deleted CPUSet assignment" podUID="11c8e0c9-b0a4-4dde-a79a-db5fa371e37c" containerName="mariadb-database-create" Dec 09 14:45:12 crc kubenswrapper[4770]: E1209 14:45:12.994208 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bceb3d-b7f4-43f6-bb98-834984940e5b" containerName="mariadb-account-create-update" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994215 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bceb3d-b7f4-43f6-bb98-834984940e5b" containerName="mariadb-account-create-update" Dec 09 14:45:12 crc kubenswrapper[4770]: E1209 14:45:12.994225 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf370f4-55f9-49a5-86a8-751c2b6ff94d" containerName="mariadb-database-create" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994232 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf370f4-55f9-49a5-86a8-751c2b6ff94d" containerName="mariadb-database-create" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994383 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bceb3d-b7f4-43f6-bb98-834984940e5b" containerName="mariadb-account-create-update" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994394 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c7c967-d86d-4651-adf0-c7e2bc3eb428" containerName="mariadb-database-create" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994403 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="491157c5-64af-4a59-8e9f-59695e2d7b6c" containerName="mariadb-account-create-update" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994412 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b62ee43-2d8d-4d9d-a333-ba36d6755816" containerName="dnsmasq-dns" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994423 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf370f4-55f9-49a5-86a8-751c2b6ff94d" containerName="mariadb-database-create" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994431 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54c3930-0be3-4634-b250-921a22df3263" containerName="mariadb-account-create-update" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994437 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="11c8e0c9-b0a4-4dde-a79a-db5fa371e37c" containerName="mariadb-database-create" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.994448 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7044d2d-0426-4e15-ae14-5734939f8884" containerName="collect-profiles" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.995068 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.997140 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-c7ww5" Dec 09 14:45:12 crc kubenswrapper[4770]: I1209 14:45:12.999660 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.015268 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gbddk"] Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.104981 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-config-data\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.105111 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-db-sync-config-data\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.105418 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-combined-ca-bundle\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.105670 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl8gd\" (UniqueName: \"kubernetes.io/projected/b0c0c709-cb21-49e0-ba23-211f0cd1749d-kube-api-access-zl8gd\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.209888 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-combined-ca-bundle\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.210034 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl8gd\" (UniqueName: \"kubernetes.io/projected/b0c0c709-cb21-49e0-ba23-211f0cd1749d-kube-api-access-zl8gd\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.210118 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-config-data\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.210181 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-db-sync-config-data\") pod 
\"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.216185 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-db-sync-config-data\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.220625 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-config-data\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.228427 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl8gd\" (UniqueName: \"kubernetes.io/projected/b0c0c709-cb21-49e0-ba23-211f0cd1749d-kube-api-access-zl8gd\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.228592 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-combined-ca-bundle\") pod \"glance-db-sync-gbddk\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.317431 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gbddk" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.830880 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 09 14:45:13 crc kubenswrapper[4770]: I1209 14:45:13.929254 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gbddk"] Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.102092 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.103704 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.105663 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.105954 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.106310 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.106470 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6bf4r" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.126472 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.231100 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522fb59d-17cc-47a2-8c3d-1025aacfd292-config\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.231154 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522fb59d-17cc-47a2-8c3d-1025aacfd292-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.231176 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/522fb59d-17cc-47a2-8c3d-1025aacfd292-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.231194 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrnk9\" (UniqueName: \"kubernetes.io/projected/522fb59d-17cc-47a2-8c3d-1025aacfd292-kube-api-access-qrnk9\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.231451 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/522fb59d-17cc-47a2-8c3d-1025aacfd292-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.231499 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/522fb59d-17cc-47a2-8c3d-1025aacfd292-scripts\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.231623 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/522fb59d-17cc-47a2-8c3d-1025aacfd292-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0" Dec 09 14:45:14 crc kubenswrapper[4770]: 
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.243958 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.244004 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.333159 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/522fb59d-17cc-47a2-8c3d-1025aacfd292-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.333219 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/522fb59d-17cc-47a2-8c3d-1025aacfd292-scripts\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.333329 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/522fb59d-17cc-47a2-8c3d-1025aacfd292-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.333392 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522fb59d-17cc-47a2-8c3d-1025aacfd292-config\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.333426 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522fb59d-17cc-47a2-8c3d-1025aacfd292-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.333453 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/522fb59d-17cc-47a2-8c3d-1025aacfd292-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.333473 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrnk9\" (UniqueName: \"kubernetes.io/projected/522fb59d-17cc-47a2-8c3d-1025aacfd292-kube-api-access-qrnk9\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.334177 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/522fb59d-17cc-47a2-8c3d-1025aacfd292-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.334852 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/522fb59d-17cc-47a2-8c3d-1025aacfd292-scripts\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.335122 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522fb59d-17cc-47a2-8c3d-1025aacfd292-config\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.340511 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/522fb59d-17cc-47a2-8c3d-1025aacfd292-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.340511 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522fb59d-17cc-47a2-8c3d-1025aacfd292-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.342305 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/522fb59d-17cc-47a2-8c3d-1025aacfd292-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.356300 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrnk9\" (UniqueName: \"kubernetes.io/projected/522fb59d-17cc-47a2-8c3d-1025aacfd292-kube-api-access-qrnk9\") pod \"ovn-northd-0\" (UID: \"522fb59d-17cc-47a2-8c3d-1025aacfd292\") " pod="openstack/ovn-northd-0"
Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.387625 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
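[Editor's note: the machine-config-daemon liveness failure above is a plain HTTP GET probe; "connection refused" means nothing was listening on 127.0.0.1:8798 when the kubelet probed. A sketch of an equivalent probe spec using the k8s.io/api types; host, port, and path are taken from the log line, while the timing values are illustrative assumptions:]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Shape of the probe that is failing above. The kubelet issues
	// GET http://127.0.0.1:8798/health and treats a refused connection
	// as a probe failure; enough consecutive failures restart the container.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt32(8798),
			},
		},
		PeriodSeconds:    10, // assumed
		FailureThreshold: 3,  // assumed
	}
	fmt.Printf("%+v\n", probe)
}
```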
Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.508097 4770 generic.go:334] "Generic (PLEG): container finished" podID="b8f30831-ad4f-4009-b177-e645f911f5b4" containerID="adb65e1a987e40c8f3c2ef099964e0926fe0f5c9362de5861e32b9695a4ca187" exitCode=0 Dec 09 14:45:14 crc kubenswrapper[4770]: I1209 14:45:14.508158 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dshr4" event={"ID":"b8f30831-ad4f-4009-b177-e645f911f5b4","Type":"ContainerDied","Data":"adb65e1a987e40c8f3c2ef099964e0926fe0f5c9362de5861e32b9695a4ca187"} Dec 09 14:45:15 crc kubenswrapper[4770]: W1209 14:45:15.486398 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0c0c709_cb21_49e0_ba23_211f0cd1749d.slice/crio-615d5bac356118778c4074709303a00380944b5f723626ca4e3e626cc3e31d5c WatchSource:0}: Error finding container 615d5bac356118778c4074709303a00380944b5f723626ca4e3e626cc3e31d5c: Status 404 returned error can't find the container with id 615d5bac356118778c4074709303a00380944b5f723626ca4e3e626cc3e31d5c Dec 09 14:45:15 crc kubenswrapper[4770]: I1209 14:45:15.519512 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gbddk" event={"ID":"b0c0c709-cb21-49e0-ba23-211f0cd1749d","Type":"ContainerStarted","Data":"615d5bac356118778c4074709303a00380944b5f723626ca4e3e626cc3e31d5c"} Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.002820 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 14:45:16 crc kubenswrapper[4770]: W1209 14:45:16.008881 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod522fb59d_17cc_47a2_8c3d_1025aacfd292.slice/crio-0c1615c9de941824956cdcfb9fd6a28573c8ac6ac866d2b2197a2f436f208eca WatchSource:0}: Error finding container 0c1615c9de941824956cdcfb9fd6a28573c8ac6ac866d2b2197a2f436f208eca: Status 404 returned error can't find the container with id 0c1615c9de941824956cdcfb9fd6a28573c8ac6ac866d2b2197a2f436f208eca Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.031657 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.065192 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-combined-ca-bundle\") pod \"b8f30831-ad4f-4009-b177-e645f911f5b4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.065274 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-scripts\") pod \"b8f30831-ad4f-4009-b177-e645f911f5b4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.065344 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b8f30831-ad4f-4009-b177-e645f911f5b4-etc-swift\") pod \"b8f30831-ad4f-4009-b177-e645f911f5b4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.065428 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-swiftconf\") pod \"b8f30831-ad4f-4009-b177-e645f911f5b4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.065508 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xpf6\" (UniqueName: \"kubernetes.io/projected/b8f30831-ad4f-4009-b177-e645f911f5b4-kube-api-access-4xpf6\") pod \"b8f30831-ad4f-4009-b177-e645f911f5b4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.065528 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-dispersionconf\") pod \"b8f30831-ad4f-4009-b177-e645f911f5b4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.065544 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-ring-data-devices\") pod \"b8f30831-ad4f-4009-b177-e645f911f5b4\" (UID: \"b8f30831-ad4f-4009-b177-e645f911f5b4\") " Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.072318 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b8f30831-ad4f-4009-b177-e645f911f5b4" (UID: "b8f30831-ad4f-4009-b177-e645f911f5b4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.073067 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f30831-ad4f-4009-b177-e645f911f5b4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b8f30831-ad4f-4009-b177-e645f911f5b4" (UID: "b8f30831-ad4f-4009-b177-e645f911f5b4"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.078378 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b8f30831-ad4f-4009-b177-e645f911f5b4" (UID: "b8f30831-ad4f-4009-b177-e645f911f5b4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.082396 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f30831-ad4f-4009-b177-e645f911f5b4-kube-api-access-4xpf6" (OuterVolumeSpecName: "kube-api-access-4xpf6") pod "b8f30831-ad4f-4009-b177-e645f911f5b4" (UID: "b8f30831-ad4f-4009-b177-e645f911f5b4"). InnerVolumeSpecName "kube-api-access-4xpf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.103404 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8f30831-ad4f-4009-b177-e645f911f5b4" (UID: "b8f30831-ad4f-4009-b177-e645f911f5b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.103900 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b8f30831-ad4f-4009-b177-e645f911f5b4" (UID: "b8f30831-ad4f-4009-b177-e645f911f5b4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.105167 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-scripts" (OuterVolumeSpecName: "scripts") pod "b8f30831-ad4f-4009-b177-e645f911f5b4" (UID: "b8f30831-ad4f-4009-b177-e645f911f5b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.167822 4770 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.167857 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xpf6\" (UniqueName: \"kubernetes.io/projected/b8f30831-ad4f-4009-b177-e645f911f5b4-kube-api-access-4xpf6\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.167871 4770 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.167881 4770 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.167896 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f30831-ad4f-4009-b177-e645f911f5b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.167905 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8f30831-ad4f-4009-b177-e645f911f5b4-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.167915 4770 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b8f30831-ad4f-4009-b177-e645f911f5b4-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.460544 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-75lfm" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.537633 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dshr4" event={"ID":"b8f30831-ad4f-4009-b177-e645f911f5b4","Type":"ContainerDied","Data":"7c7eaf34dc65b3826f5d36183942764836f0dd992ebdf773f07d2c51fcbaf226"} Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.537671 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c7eaf34dc65b3826f5d36183942764836f0dd992ebdf773f07d2c51fcbaf226" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.537770 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dshr4" Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.540814 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"08118160-2e03-4319-97ed-051b92b14c1e","Type":"ContainerStarted","Data":"8416d3a0ac052aa4bed292c14cb0a49ecdd06984a2e810d3590bc037ccf1ec82"} Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.549573 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"522fb59d-17cc-47a2-8c3d-1025aacfd292","Type":"ContainerStarted","Data":"0c1615c9de941824956cdcfb9fd6a28573c8ac6ac866d2b2197a2f436f208eca"} Dec 09 14:45:16 crc kubenswrapper[4770]: I1209 14:45:16.794569 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-6tmxj" Dec 09 14:45:17 crc kubenswrapper[4770]: I1209 14:45:17.092111 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp" Dec 09 14:45:17 crc kubenswrapper[4770]: I1209 14:45:17.993208 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 14:45:17 crc kubenswrapper[4770]: I1209 14:45:17.999192 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="67dab40a-3d7c-4737-bca9-28dc6280071c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 14:45:18 crc kubenswrapper[4770]: I1209 14:45:18.000211 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 14:45:20 crc kubenswrapper[4770]: I1209 14:45:20.605263 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Dec 09 14:45:20 crc kubenswrapper[4770]: I1209 14:45:20.605737 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerStarted","Data":"c57f67cb4699c1e99bf8bbd21e94cdb66f7a76230afe74af9f578b7723910ef6"} Dec 09 14:45:20 crc kubenswrapper[4770]: I1209 14:45:20.605318 4770 generic.go:334] "Generic (PLEG): container finished" podID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" containerID="d49bf7eb57ed05ebfbd4c93d645ad809edd662e415eb80a9cfd4911f513fdb00" exitCode=0 Dec 09 14:45:20 crc kubenswrapper[4770]: I1209 14:45:20.605759 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"08118160-2e03-4319-97ed-051b92b14c1e","Type":"ContainerStarted","Data":"20c70d8ff8079aa6dc049eee0738a25c0fb9fe43bb0e0d3914b3fe4e65adda08"} Dec 09 14:45:20 crc kubenswrapper[4770]: I1209 14:45:20.605805 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Dec 09 14:45:20 crc kubenswrapper[4770]: I1209 14:45:20.605816 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469","Type":"ContainerDied","Data":"d49bf7eb57ed05ebfbd4c93d645ad809edd662e415eb80a9cfd4911f513fdb00"} Dec 09 14:45:20 crc kubenswrapper[4770]: I1209 14:45:20.626157 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=29.967726024 
podStartE2EDuration="1m36.626139613s" podCreationTimestamp="2025-12-09 14:43:44 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.236537217 +0000 UTC m=+1281.132739353" lastFinishedPulling="2025-12-09 14:45:15.894950806 +0000 UTC m=+1347.791152942" observedRunningTime="2025-12-09 14:45:20.619480649 +0000 UTC m=+1352.515682785" watchObservedRunningTime="2025-12-09 14:45:20.626139613 +0000 UTC m=+1352.522341749" Dec 09 14:45:21 crc kubenswrapper[4770]: I1209 14:45:21.616464 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"522fb59d-17cc-47a2-8c3d-1025aacfd292","Type":"ContainerStarted","Data":"04ab5a2b74fdd8d18435f474607e7f774f5e0d76a75a103a9fc8f22abaad93a3"} Dec 09 14:45:21 crc kubenswrapper[4770]: I1209 14:45:21.617298 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 09 14:45:21 crc kubenswrapper[4770]: I1209 14:45:21.617322 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"522fb59d-17cc-47a2-8c3d-1025aacfd292","Type":"ContainerStarted","Data":"91d0117e9a8f68108bde3924235e3b8dc0f3393f38cc078e87237c92a4ca8c45"} Dec 09 14:45:21 crc kubenswrapper[4770]: I1209 14:45:21.619333 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469","Type":"ContainerStarted","Data":"fd0ac14296b88363575acbf86bb4972dc526f2fdb48cee7ef5aa1e33594edf34"} Dec 09 14:45:21 crc kubenswrapper[4770]: I1209 14:45:21.620286 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 14:45:21 crc kubenswrapper[4770]: I1209 14:45:21.644224 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.7508517169999998 podStartE2EDuration="7.644208211s" podCreationTimestamp="2025-12-09 14:45:14 +0000 UTC" firstStartedPulling="2025-12-09 14:45:16.010966593 +0000 UTC m=+1347.907168729" lastFinishedPulling="2025-12-09 14:45:20.904323087 +0000 UTC m=+1352.800525223" observedRunningTime="2025-12-09 14:45:21.640946301 +0000 UTC m=+1353.537148437" watchObservedRunningTime="2025-12-09 14:45:21.644208211 +0000 UTC m=+1353.540410337" Dec 09 14:45:21 crc kubenswrapper[4770]: I1209 14:45:21.679155 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.232914341 podStartE2EDuration="1m44.679131649s" podCreationTimestamp="2025-12-09 14:43:37 +0000 UTC" firstStartedPulling="2025-12-09 14:43:39.8081296 +0000 UTC m=+1251.704331736" lastFinishedPulling="2025-12-09 14:44:46.254346898 +0000 UTC m=+1318.150549044" observedRunningTime="2025-12-09 14:45:21.668931327 +0000 UTC m=+1353.565133483" watchObservedRunningTime="2025-12-09 14:45:21.679131649 +0000 UTC m=+1353.575333795" Dec 09 14:45:23 crc kubenswrapper[4770]: I1209 14:45:23.642886 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerStarted","Data":"17d4127ceba499a5f6ddf6af23af2490c8233cbed8319b50eda60155e15c77c6"} Dec 09 14:45:27 crc kubenswrapper[4770]: I1209 14:45:27.759311 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 
14:45:27 crc kubenswrapper[4770]: I1209 14:45:27.769242 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cbc15e71-9605-466b-8947-aa2ca716bc2d-etc-swift\") pod \"swift-storage-0\" (UID: \"cbc15e71-9605-466b-8947-aa2ca716bc2d\") " pod="openstack/swift-storage-0" Dec 09 14:45:27 crc kubenswrapper[4770]: I1209 14:45:27.859396 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 09 14:45:27 crc kubenswrapper[4770]: I1209 14:45:27.987547 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="67dab40a-3d7c-4737-bca9-28dc6280071c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 14:45:31 crc kubenswrapper[4770]: E1209 14:45:31.137031 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 09 14:45:31 crc kubenswrapper[4770]: E1209 14:45:31.138077 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl8gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-gbddk_openstack(b0c0c709-cb21-49e0-ba23-211f0cd1749d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:45:31 crc kubenswrapper[4770]: E1209 14:45:31.139627 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-gbddk" podUID="b0c0c709-cb21-49e0-ba23-211f0cd1749d" Dec 09 14:45:31 crc kubenswrapper[4770]: I1209 14:45:31.724858 4770 generic.go:334] "Generic (PLEG): container finished" podID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" containerID="0124084ad1afd737a4b241af8de506900eae154a555a4d4a8ce733160ae30b59" exitCode=0 Dec 09 14:45:31 crc kubenswrapper[4770]: I1209 14:45:31.725828 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31bb1b14-4de1-4586-8bde-d29afdaad6fd","Type":"ContainerDied","Data":"0124084ad1afd737a4b241af8de506900eae154a555a4d4a8ce733160ae30b59"} Dec 09 14:45:31 crc kubenswrapper[4770]: I1209 14:45:31.727483 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 09 14:45:31 crc kubenswrapper[4770]: E1209 14:45:31.730394 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-gbddk" podUID="b0c0c709-cb21-49e0-ba23-211f0cd1749d" Dec 09 14:45:32 crc kubenswrapper[4770]: I1209 14:45:32.741387 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31bb1b14-4de1-4586-8bde-d29afdaad6fd","Type":"ContainerStarted","Data":"769cd2dac8b651775fe4e262a9cfff69636101a7f24c7a371e86957852a08b3f"} Dec 09 14:45:32 crc kubenswrapper[4770]: I1209 14:45:32.741993 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:45:32 crc kubenswrapper[4770]: I1209 14:45:32.743895 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"7f50b0825c4790836f94e0ae4ce8dd70c629e5d301fa4dd4fde4518b805fc834"} Dec 09 14:45:32 crc kubenswrapper[4770]: I1209 14:45:32.770177 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371922.084625 podStartE2EDuration="1m54.770151323s" podCreationTimestamp="2025-12-09 14:43:38 +0000 UTC" firstStartedPulling="2025-12-09 14:43:40.108190339 +0000 UTC m=+1252.004392465" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:32.766810474 +0000 UTC m=+1364.663012610" watchObservedRunningTime="2025-12-09 14:45:32.770151323 +0000 UTC m=+1364.666353459" Dec 09 14:45:34 crc kubenswrapper[4770]: I1209 14:45:34.107019 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-slxb5" podUID="8eebc5e5-e737-4171-abed-1e04fa89b0b4" containerName="ovn-controller" probeResult="failure" output=< Dec 09 14:45:34 crc kubenswrapper[4770]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 14:45:34 crc kubenswrapper[4770]: > Dec 09 14:45:34 crc kubenswrapper[4770]: I1209 14:45:34.517600 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 09 14:45:35 crc kubenswrapper[4770]: I1209 14:45:35.774011 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerStarted","Data":"e2e9e0386cf941da98edf8f7eb32466b1956050d3cd236961bfd89f1f44bce35"} Dec 09 14:45:35 crc kubenswrapper[4770]: I1209 14:45:35.776569 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"da9a0f0f9f8160b65127b57bbf8b34cd063b35f92227c01a4c40221d3b0c9125"} Dec 09 14:45:35 crc kubenswrapper[4770]: I1209 14:45:35.776608 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"276bdd53ee872d5cf02e9c26a1d642ed8825ee555327591ff754fc63852a3ff6"} Dec 09 14:45:35 crc kubenswrapper[4770]: I1209 14:45:35.788407 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:35 crc kubenswrapper[4770]: I1209 14:45:35.803329 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=25.782521083 podStartE2EDuration="1m51.803305404s" podCreationTimestamp="2025-12-09 14:43:44 +0000 UTC" firstStartedPulling="2025-12-09 14:44:09.243064917 +0000 UTC m=+1281.139267053" lastFinishedPulling="2025-12-09 14:45:35.263849238 +0000 UTC m=+1367.160051374" observedRunningTime="2025-12-09 14:45:35.800380836 +0000 UTC m=+1367.696582972" watchObservedRunningTime="2025-12-09 14:45:35.803305404 +0000 UTC m=+1367.699507540" Dec 09 14:45:36 crc kubenswrapper[4770]: I1209 14:45:36.788866 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"2159742659af20c95a3bbd5b5e8555cdb2dba2d119507243702bd71f5f148d5b"} Dec 09 14:45:36 crc kubenswrapper[4770]: I1209 14:45:36.789139 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"79ca62a1efe2b6e0dfd289883fa248a26922b2e75deb500fae3ebe61ff5f1e94"} Dec 09 14:45:37 crc kubenswrapper[4770]: I1209 14:45:37.826421 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"c3637b97c43b8eadacc6b940a6cdaa27089f16e09daffd2b9e73c9e938c22f53"} Dec 09 14:45:37 crc kubenswrapper[4770]: I1209 14:45:37.993399 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="67dab40a-3d7c-4737-bca9-28dc6280071c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 14:45:38 crc kubenswrapper[4770]: I1209 14:45:38.835930 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"5348345dc3ace87cee32ffdaad4e4c7f641ca80cb05c7d9e50c4c9b84057925f"} Dec 09 14:45:38 crc kubenswrapper[4770]: I1209 14:45:38.835973 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"d0fae50195a0a69ce1f0d5ca27b51c5e531de7e258f38af0c91c07bbf8d71cba"} Dec 09 14:45:38 crc kubenswrapper[4770]: I1209 14:45:38.835983 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"7958f164c68e2ecc46ef9445150265308e4617f4affe087efc6e7cb9b622a6be"} Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.128544 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-slxb5" podUID="8eebc5e5-e737-4171-abed-1e04fa89b0b4" containerName="ovn-controller" probeResult="failure" output=< Dec 09 14:45:39 crc kubenswrapper[4770]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 14:45:39 crc kubenswrapper[4770]: > Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.142657 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.143252 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5ng4w" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.247947 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.384180 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-slxb5-config-l48f2"] Dec 09 14:45:39 crc kubenswrapper[4770]: E1209 14:45:39.384953 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f30831-ad4f-4009-b177-e645f911f5b4" containerName="swift-ring-rebalance" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.384971 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f30831-ad4f-4009-b177-e645f911f5b4" containerName="swift-ring-rebalance" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.385154 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f30831-ad4f-4009-b177-e645f911f5b4" containerName="swift-ring-rebalance" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.386241 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.390983 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.397752 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-slxb5-config-l48f2"] Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.497716 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znwjj\" (UniqueName: \"kubernetes.io/projected/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-kube-api-access-znwjj\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.497807 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.497877 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-log-ovn\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.497926 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-additional-scripts\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.498023 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run-ovn\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.498193 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-scripts\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.600320 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znwjj\" (UniqueName: \"kubernetes.io/projected/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-kube-api-access-znwjj\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.600373 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.600420 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-log-ovn\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.600451 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-additional-scripts\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.600487 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run-ovn\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.600518 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-scripts\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.601695 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.601801 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-log-ovn\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.602511 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-additional-scripts\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.602586 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run-ovn\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.602934 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-scripts\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.645348 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znwjj\" (UniqueName: \"kubernetes.io/projected/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-kube-api-access-znwjj\") pod \"ovn-controller-slxb5-config-l48f2\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.653382 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ckkhz"] Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.660070 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.682751 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ckkhz"] Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.715250 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.829680 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq62p\" (UniqueName: \"kubernetes.io/projected/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-kube-api-access-bq62p\") pod \"cinder-db-create-ckkhz\" (UID: \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\") " pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.829768 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-operator-scripts\") pod \"cinder-db-create-ckkhz\" (UID: \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\") " pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.855212 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2bfb-account-create-update-jsvdf"] Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.856866 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2bfb-account-create-update-jsvdf" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.861031 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.908800 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2bfb-account-create-update-jsvdf"] Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.931930 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq62p\" (UniqueName: \"kubernetes.io/projected/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-kube-api-access-bq62p\") pod \"cinder-db-create-ckkhz\" (UID: \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\") " pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.931983 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-operator-scripts\") pod \"cinder-db-create-ckkhz\" (UID: \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\") " pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.932699 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-operator-scripts\") pod \"cinder-db-create-ckkhz\" (UID: \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\") " pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.968166 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-769fn"] Dec 09 14:45:39 crc kubenswrapper[4770]: I1209 14:45:39.969775 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-769fn" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.007958 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a98f-account-create-update-8gx6q"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.010354 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.015953 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.018854 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-769fn"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.022927 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq62p\" (UniqueName: \"kubernetes.io/projected/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-kube-api-access-bq62p\") pod \"cinder-db-create-ckkhz\" (UID: \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\") " pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.031350 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a98f-account-create-update-8gx6q"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.035921 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grkc\" (UniqueName: \"kubernetes.io/projected/508799c0-ff24-43d1-b131-cf60b96facfd-kube-api-access-9grkc\") pod \"cinder-2bfb-account-create-update-jsvdf\" (UID: \"508799c0-ff24-43d1-b131-cf60b96facfd\") " pod="openstack/cinder-2bfb-account-create-update-jsvdf" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.036064 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508799c0-ff24-43d1-b131-cf60b96facfd-operator-scripts\") pod \"cinder-2bfb-account-create-update-jsvdf\" (UID: \"508799c0-ff24-43d1-b131-cf60b96facfd\") " pod="openstack/cinder-2bfb-account-create-update-jsvdf" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.145642 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-operator-scripts\") pod \"barbican-a98f-account-create-update-8gx6q\" (UID: \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\") " pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.146062 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9grkc\" (UniqueName: \"kubernetes.io/projected/508799c0-ff24-43d1-b131-cf60b96facfd-kube-api-access-9grkc\") pod \"cinder-2bfb-account-create-update-jsvdf\" (UID: \"508799c0-ff24-43d1-b131-cf60b96facfd\") " pod="openstack/cinder-2bfb-account-create-update-jsvdf" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.146154 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95983696-25bb-4b13-8a96-59b7af59dda4-operator-scripts\") pod \"barbican-db-create-769fn\" (UID: \"95983696-25bb-4b13-8a96-59b7af59dda4\") " pod="openstack/barbican-db-create-769fn" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.146220 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snkgm\" (UniqueName: \"kubernetes.io/projected/95983696-25bb-4b13-8a96-59b7af59dda4-kube-api-access-snkgm\") pod \"barbican-db-create-769fn\" (UID: \"95983696-25bb-4b13-8a96-59b7af59dda4\") " pod="openstack/barbican-db-create-769fn" Dec 09 14:45:40 crc 
kubenswrapper[4770]: I1209 14:45:40.146313 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508799c0-ff24-43d1-b131-cf60b96facfd-operator-scripts\") pod \"cinder-2bfb-account-create-update-jsvdf\" (UID: \"508799c0-ff24-43d1-b131-cf60b96facfd\") " pod="openstack/cinder-2bfb-account-create-update-jsvdf" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.146413 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzwhm\" (UniqueName: \"kubernetes.io/projected/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-kube-api-access-fzwhm\") pod \"barbican-a98f-account-create-update-8gx6q\" (UID: \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\") " pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.148017 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508799c0-ff24-43d1-b131-cf60b96facfd-operator-scripts\") pod \"cinder-2bfb-account-create-update-jsvdf\" (UID: \"508799c0-ff24-43d1-b131-cf60b96facfd\") " pod="openstack/cinder-2bfb-account-create-update-jsvdf" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.170356 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xn76l"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.177494 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.182158 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fnhpf" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.182289 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.182443 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.182305 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.195650 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9grkc\" (UniqueName: \"kubernetes.io/projected/508799c0-ff24-43d1-b131-cf60b96facfd-kube-api-access-9grkc\") pod \"cinder-2bfb-account-create-update-jsvdf\" (UID: \"508799c0-ff24-43d1-b131-cf60b96facfd\") " pod="openstack/cinder-2bfb-account-create-update-jsvdf" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.244213 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xn76l"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.248153 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzwhm\" (UniqueName: \"kubernetes.io/projected/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-kube-api-access-fzwhm\") pod \"barbican-a98f-account-create-update-8gx6q\" (UID: \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\") " pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.248258 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-operator-scripts\") pod \"barbican-a98f-account-create-update-8gx6q\" 
(UID: \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\") " pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.248302 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95983696-25bb-4b13-8a96-59b7af59dda4-operator-scripts\") pod \"barbican-db-create-769fn\" (UID: \"95983696-25bb-4b13-8a96-59b7af59dda4\") " pod="openstack/barbican-db-create-769fn" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.248338 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snkgm\" (UniqueName: \"kubernetes.io/projected/95983696-25bb-4b13-8a96-59b7af59dda4-kube-api-access-snkgm\") pod \"barbican-db-create-769fn\" (UID: \"95983696-25bb-4b13-8a96-59b7af59dda4\") " pod="openstack/barbican-db-create-769fn" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.249380 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-operator-scripts\") pod \"barbican-a98f-account-create-update-8gx6q\" (UID: \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\") " pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.249915 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95983696-25bb-4b13-8a96-59b7af59dda4-operator-scripts\") pod \"barbican-db-create-769fn\" (UID: \"95983696-25bb-4b13-8a96-59b7af59dda4\") " pod="openstack/barbican-db-create-769fn" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.277336 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snkgm\" (UniqueName: \"kubernetes.io/projected/95983696-25bb-4b13-8a96-59b7af59dda4-kube-api-access-snkgm\") pod \"barbican-db-create-769fn\" (UID: \"95983696-25bb-4b13-8a96-59b7af59dda4\") " pod="openstack/barbican-db-create-769fn" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.279952 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzwhm\" (UniqueName: \"kubernetes.io/projected/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-kube-api-access-fzwhm\") pod \"barbican-a98f-account-create-update-8gx6q\" (UID: \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\") " pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.287043 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-48e7-account-create-update-lwm9s"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.288751 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2bfb-account-create-update-jsvdf" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.289195 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.290484 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.297476 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-48e7-account-create-update-lwm9s"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.303175 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.344368 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-769fn" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.350430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-combined-ca-bundle\") pod \"keystone-db-sync-xn76l\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.350596 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldkg\" (UniqueName: \"kubernetes.io/projected/a0ed519f-b86b-4baf-a731-ffe46bc15641-kube-api-access-7ldkg\") pod \"keystone-db-sync-xn76l\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.350647 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-config-data\") pod \"keystone-db-sync-xn76l\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.365324 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-xcbxz"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.366625 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.377351 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-xcbxz"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.451820 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5zmr\" (UniqueName: \"kubernetes.io/projected/23bb07cc-d85a-4cbd-a8ed-429d01349e74-kube-api-access-s5zmr\") pod \"cloudkitty-db-create-xcbxz\" (UID: \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\") " pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.452072 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldkg\" (UniqueName: \"kubernetes.io/projected/a0ed519f-b86b-4baf-a731-ffe46bc15641-kube-api-access-7ldkg\") pod \"keystone-db-sync-xn76l\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.452111 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23bb07cc-d85a-4cbd-a8ed-429d01349e74-operator-scripts\") pod \"cloudkitty-db-create-xcbxz\" (UID: \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\") " pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.452140 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-config-data\") pod \"keystone-db-sync-xn76l\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.452175 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-combined-ca-bundle\") pod \"keystone-db-sync-xn76l\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.452199 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgth2\" (UniqueName: \"kubernetes.io/projected/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-kube-api-access-mgth2\") pod \"cloudkitty-48e7-account-create-update-lwm9s\" (UID: \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\") " pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.452263 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-operator-scripts\") pod \"cloudkitty-48e7-account-create-update-lwm9s\" (UID: \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\") " pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.454864 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.461087 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-config-data\") pod \"keystone-db-sync-xn76l\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.471571 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-combined-ca-bundle\") pod \"keystone-db-sync-xn76l\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.474781 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9qxdb"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.476059 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.479835 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldkg\" (UniqueName: \"kubernetes.io/projected/a0ed519f-b86b-4baf-a731-ffe46bc15641-kube-api-access-7ldkg\") pod \"keystone-db-sync-xn76l\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.509150 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xn76l" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.517718 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8111-account-create-update-9hcm5"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.519350 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.527680 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.533614 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8111-account-create-update-9hcm5"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.554035 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgth2\" (UniqueName: \"kubernetes.io/projected/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-kube-api-access-mgth2\") pod \"cloudkitty-48e7-account-create-update-lwm9s\" (UID: \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\") " pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.554127 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-operator-scripts\") pod \"cloudkitty-48e7-account-create-update-lwm9s\" (UID: \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\") " pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.554156 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39626459-4da7-4aae-b9b4-187a18cd1c3e-operator-scripts\") pod \"neutron-db-create-9qxdb\" (UID: \"39626459-4da7-4aae-b9b4-187a18cd1c3e\") " pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.554210 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5zmr\" (UniqueName: \"kubernetes.io/projected/23bb07cc-d85a-4cbd-a8ed-429d01349e74-kube-api-access-s5zmr\") pod \"cloudkitty-db-create-xcbxz\" (UID: \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\") " pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.554230 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnnm\" (UniqueName: \"kubernetes.io/projected/39626459-4da7-4aae-b9b4-187a18cd1c3e-kube-api-access-lxnnm\") pod \"neutron-db-create-9qxdb\" (UID: \"39626459-4da7-4aae-b9b4-187a18cd1c3e\") " pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.554262 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23bb07cc-d85a-4cbd-a8ed-429d01349e74-operator-scripts\") pod \"cloudkitty-db-create-xcbxz\" (UID: \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\") " pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.556838 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23bb07cc-d85a-4cbd-a8ed-429d01349e74-operator-scripts\") pod \"cloudkitty-db-create-xcbxz\" (UID: \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\") " pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.557554 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-operator-scripts\") pod 
\"cloudkitty-48e7-account-create-update-lwm9s\" (UID: \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\") " pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.566577 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9qxdb"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.600356 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgth2\" (UniqueName: \"kubernetes.io/projected/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-kube-api-access-mgth2\") pod \"cloudkitty-48e7-account-create-update-lwm9s\" (UID: \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\") " pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.625223 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5zmr\" (UniqueName: \"kubernetes.io/projected/23bb07cc-d85a-4cbd-a8ed-429d01349e74-kube-api-access-s5zmr\") pod \"cloudkitty-db-create-xcbxz\" (UID: \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\") " pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.651540 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.653010 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-slxb5-config-l48f2"] Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.659197 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39626459-4da7-4aae-b9b4-187a18cd1c3e-operator-scripts\") pod \"neutron-db-create-9qxdb\" (UID: \"39626459-4da7-4aae-b9b4-187a18cd1c3e\") " pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.659282 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec87ce87-2276-4c01-bf53-60236cda1e26-operator-scripts\") pod \"neutron-8111-account-create-update-9hcm5\" (UID: \"ec87ce87-2276-4c01-bf53-60236cda1e26\") " pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.659346 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstrh\" (UniqueName: \"kubernetes.io/projected/ec87ce87-2276-4c01-bf53-60236cda1e26-kube-api-access-bstrh\") pod \"neutron-8111-account-create-update-9hcm5\" (UID: \"ec87ce87-2276-4c01-bf53-60236cda1e26\") " pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.659384 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxnnm\" (UniqueName: \"kubernetes.io/projected/39626459-4da7-4aae-b9b4-187a18cd1c3e-kube-api-access-lxnnm\") pod \"neutron-db-create-9qxdb\" (UID: \"39626459-4da7-4aae-b9b4-187a18cd1c3e\") " pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.660470 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39626459-4da7-4aae-b9b4-187a18cd1c3e-operator-scripts\") pod \"neutron-db-create-9qxdb\" (UID: \"39626459-4da7-4aae-b9b4-187a18cd1c3e\") " pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:40 crc 
kubenswrapper[4770]: I1209 14:45:40.692602 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxnnm\" (UniqueName: \"kubernetes.io/projected/39626459-4da7-4aae-b9b4-187a18cd1c3e-kube-api-access-lxnnm\") pod \"neutron-db-create-9qxdb\" (UID: \"39626459-4da7-4aae-b9b4-187a18cd1c3e\") " pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.707501 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:40 crc kubenswrapper[4770]: W1209 14:45:40.721045 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5a1feb7_615c_43ec_ad5b_4691c3a42f96.slice/crio-2b2a0fa68e0e3c3d21c69f5a5971c33b15d1f115c9f0030768bd1ab4116eda19 WatchSource:0}: Error finding container 2b2a0fa68e0e3c3d21c69f5a5971c33b15d1f115c9f0030768bd1ab4116eda19: Status 404 returned error can't find the container with id 2b2a0fa68e0e3c3d21c69f5a5971c33b15d1f115c9f0030768bd1ab4116eda19 Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.762533 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstrh\" (UniqueName: \"kubernetes.io/projected/ec87ce87-2276-4c01-bf53-60236cda1e26-kube-api-access-bstrh\") pod \"neutron-8111-account-create-update-9hcm5\" (UID: \"ec87ce87-2276-4c01-bf53-60236cda1e26\") " pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.763232 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec87ce87-2276-4c01-bf53-60236cda1e26-operator-scripts\") pod \"neutron-8111-account-create-update-9hcm5\" (UID: \"ec87ce87-2276-4c01-bf53-60236cda1e26\") " pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.765465 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec87ce87-2276-4c01-bf53-60236cda1e26-operator-scripts\") pod \"neutron-8111-account-create-update-9hcm5\" (UID: \"ec87ce87-2276-4c01-bf53-60236cda1e26\") " pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.837844 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.863608 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstrh\" (UniqueName: \"kubernetes.io/projected/ec87ce87-2276-4c01-bf53-60236cda1e26-kube-api-access-bstrh\") pod \"neutron-8111-account-create-update-9hcm5\" (UID: \"ec87ce87-2276-4c01-bf53-60236cda1e26\") " pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.880753 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.906187 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-slxb5-config-l48f2" event={"ID":"a5a1feb7-615c-43ec-ad5b-4691c3a42f96","Type":"ContainerStarted","Data":"2b2a0fa68e0e3c3d21c69f5a5971c33b15d1f115c9f0030768bd1ab4116eda19"} Dec 09 14:45:40 crc kubenswrapper[4770]: I1209 14:45:40.948479 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"a2c2f163690f9c22c37354cfc201311d937bfbef26e83394cb47e2c3d5bf7b8e"} Dec 09 14:45:41 crc kubenswrapper[4770]: I1209 14:45:41.095326 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2bfb-account-create-update-jsvdf"] Dec 09 14:45:41 crc kubenswrapper[4770]: I1209 14:45:41.174839 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-769fn"] Dec 09 14:45:41 crc kubenswrapper[4770]: I1209 14:45:41.203230 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ckkhz"] Dec 09 14:45:41 crc kubenswrapper[4770]: W1209 14:45:41.345928 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95983696_25bb_4b13_8a96_59b7af59dda4.slice/crio-477fa24386f643456fdd78354acfefb04810e309bfebb231cf8cbb79373c4233 WatchSource:0}: Error finding container 477fa24386f643456fdd78354acfefb04810e309bfebb231cf8cbb79373c4233: Status 404 returned error can't find the container with id 477fa24386f643456fdd78354acfefb04810e309bfebb231cf8cbb79373c4233 Dec 09 14:45:42 crc kubenswrapper[4770]: I1209 14:45:41.970965 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ckkhz" event={"ID":"c33fdb1f-6ef8-479e-aa72-0f14af285ad7","Type":"ContainerStarted","Data":"a58e013e32e81f747ae687b3d245f03c3c1c9a50e3250891630b2c35ad5d9dbe"} Dec 09 14:45:42 crc kubenswrapper[4770]: I1209 14:45:41.973788 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-769fn" event={"ID":"95983696-25bb-4b13-8a96-59b7af59dda4","Type":"ContainerStarted","Data":"477fa24386f643456fdd78354acfefb04810e309bfebb231cf8cbb79373c4233"} Dec 09 14:45:42 crc kubenswrapper[4770]: I1209 14:45:41.978742 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"2ef1868f614339cfbf11152d9e24e435391534478ff8a34abaeafd6cc2e87d50"} Dec 09 14:45:42 crc kubenswrapper[4770]: I1209 14:45:41.978785 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"68f57615251bfa5d34e36c6bf100c3bc22ee0923cc29c9d25458765fba1892d0"} Dec 09 14:45:42 crc kubenswrapper[4770]: I1209 14:45:41.979990 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2bfb-account-create-update-jsvdf" event={"ID":"508799c0-ff24-43d1-b131-cf60b96facfd","Type":"ContainerStarted","Data":"9c726e6dffae48ea21596d819f6bec33b310bf2e49c1fa454594506f8909285d"} Dec 09 14:45:42 crc kubenswrapper[4770]: I1209 14:45:42.789087 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a98f-account-create-update-8gx6q"] Dec 09 14:45:42 crc kubenswrapper[4770]: I1209 14:45:42.799338 4770 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-48e7-account-create-update-lwm9s"] Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.005458 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-xcbxz"] Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.011375 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"a840bd6ee1731443d0900d21c2778393a0dea3e49e754fa1441974a313d359c5"} Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.011424 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"5a8af53b762d833d6f84129990120c18000b693de7aa324b87273af877864ea6"} Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.014263 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" event={"ID":"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad","Type":"ContainerStarted","Data":"a53d5c3743031e5fba4cbcba82072adbacf166c42a369ef4046274dffe377ec6"} Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.015919 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9qxdb"] Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.016401 4770 generic.go:334] "Generic (PLEG): container finished" podID="508799c0-ff24-43d1-b131-cf60b96facfd" containerID="0ea7c794b898b63759d7c43611eacc61fd1a73a1c6ef28fb65e6aa042e98b1c5" exitCode=0 Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.016458 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2bfb-account-create-update-jsvdf" event={"ID":"508799c0-ff24-43d1-b131-cf60b96facfd","Type":"ContainerDied","Data":"0ea7c794b898b63759d7c43611eacc61fd1a73a1c6ef28fb65e6aa042e98b1c5"} Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.020843 4770 generic.go:334] "Generic (PLEG): container finished" podID="a5a1feb7-615c-43ec-ad5b-4691c3a42f96" containerID="abb6e4645de1352b819fd286d6b304c6be978d6421f4a58b87cb3e5a2e2dee2a" exitCode=0 Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.020914 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-slxb5-config-l48f2" event={"ID":"a5a1feb7-615c-43ec-ad5b-4691c3a42f96","Type":"ContainerDied","Data":"abb6e4645de1352b819fd286d6b304c6be978d6421f4a58b87cb3e5a2e2dee2a"} Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.023426 4770 generic.go:334] "Generic (PLEG): container finished" podID="c33fdb1f-6ef8-479e-aa72-0f14af285ad7" containerID="3088c278894c834a21b82b2c59e3c57378aa4f64c913bf334ee5b2744bbe026b" exitCode=0 Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.023502 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ckkhz" event={"ID":"c33fdb1f-6ef8-479e-aa72-0f14af285ad7","Type":"ContainerDied","Data":"3088c278894c834a21b82b2c59e3c57378aa4f64c913bf334ee5b2744bbe026b"} Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.024455 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8111-account-create-update-9hcm5"] Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.025380 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a98f-account-create-update-8gx6q" 
event={"ID":"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f","Type":"ContainerStarted","Data":"df9a09b5cbc5a9fbf12373100ae6c6ee57f15a07229c603a4f4468a0caeca442"} Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.032130 4770 generic.go:334] "Generic (PLEG): container finished" podID="95983696-25bb-4b13-8a96-59b7af59dda4" containerID="d962e989cadf609eada02d4d1fee0ad48c84499a9e4a0afddae583a9363f440f" exitCode=0 Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.032185 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-769fn" event={"ID":"95983696-25bb-4b13-8a96-59b7af59dda4","Type":"ContainerDied","Data":"d962e989cadf609eada02d4d1fee0ad48c84499a9e4a0afddae583a9363f440f"} Dec 09 14:45:43 crc kubenswrapper[4770]: I1209 14:45:43.040586 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xn76l"] Dec 09 14:45:43 crc kubenswrapper[4770]: W1209 14:45:43.047230 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23bb07cc_d85a_4cbd_a8ed_429d01349e74.slice/crio-02fc889b1df3b74a9be2522bd9ae394e29f641c91c19d9e5e7e08b114dff7449 WatchSource:0}: Error finding container 02fc889b1df3b74a9be2522bd9ae394e29f641c91c19d9e5e7e08b114dff7449: Status 404 returned error can't find the container with id 02fc889b1df3b74a9be2522bd9ae394e29f641c91c19d9e5e7e08b114dff7449 Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.043207 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec87ce87-2276-4c01-bf53-60236cda1e26" containerID="bbde407fdce04eed1e604f67c5e300a6ffa78359d7698b7a0cae2c1b5cf47e04" exitCode=0 Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.043579 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8111-account-create-update-9hcm5" event={"ID":"ec87ce87-2276-4c01-bf53-60236cda1e26","Type":"ContainerDied","Data":"bbde407fdce04eed1e604f67c5e300a6ffa78359d7698b7a0cae2c1b5cf47e04"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.044156 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8111-account-create-update-9hcm5" event={"ID":"ec87ce87-2276-4c01-bf53-60236cda1e26","Type":"ContainerStarted","Data":"9cca7a9e205491656ac9a4170a9d99fc5f70f6fa9af27f74658a62a8205bf585"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.046166 4770 generic.go:334] "Generic (PLEG): container finished" podID="39626459-4da7-4aae-b9b4-187a18cd1c3e" containerID="8e3925ed987f379c889a2f874a4f12381d8e0c385c89e8d5317e39e3cbc9c33d" exitCode=0 Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.046228 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9qxdb" event={"ID":"39626459-4da7-4aae-b9b4-187a18cd1c3e","Type":"ContainerDied","Data":"8e3925ed987f379c889a2f874a4f12381d8e0c385c89e8d5317e39e3cbc9c33d"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.046256 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9qxdb" event={"ID":"39626459-4da7-4aae-b9b4-187a18cd1c3e","Type":"ContainerStarted","Data":"65ccfb8405e578266bd50b2d2bc59bb2fd87e39c3a55a26feb106b6ed82e9052"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.047614 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xn76l" event={"ID":"a0ed519f-b86b-4baf-a731-ffe46bc15641","Type":"ContainerStarted","Data":"3849be864bec3e3f2adc08f8fd1c937cba5de8b1bae66f3ffaed29f42c952ec1"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 
14:45:44.049741 4770 generic.go:334] "Generic (PLEG): container finished" podID="7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad" containerID="6e5142a8d1165bca5f6522ba9d0ac1c51ef242a5bf00dbef9bddf57700453021" exitCode=0 Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.049850 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" event={"ID":"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad","Type":"ContainerDied","Data":"6e5142a8d1165bca5f6522ba9d0ac1c51ef242a5bf00dbef9bddf57700453021"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.051633 4770 generic.go:334] "Generic (PLEG): container finished" podID="cf1a70f2-cfcc-40ad-bbd4-6f307928b20f" containerID="7f4d514f5a6cf63cd7a85ded2ca0593e944d1664a01f9e5a0655b901f7e378ad" exitCode=0 Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.051695 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a98f-account-create-update-8gx6q" event={"ID":"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f","Type":"ContainerDied","Data":"7f4d514f5a6cf63cd7a85ded2ca0593e944d1664a01f9e5a0655b901f7e378ad"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.053645 4770 generic.go:334] "Generic (PLEG): container finished" podID="23bb07cc-d85a-4cbd-a8ed-429d01349e74" containerID="6a68f4d56e77125cce1fe7b05e9971c80fc5ff934d4f6bc8be393b4185728934" exitCode=0 Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.053676 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-xcbxz" event={"ID":"23bb07cc-d85a-4cbd-a8ed-429d01349e74","Type":"ContainerDied","Data":"6a68f4d56e77125cce1fe7b05e9971c80fc5ff934d4f6bc8be393b4185728934"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.053770 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-xcbxz" event={"ID":"23bb07cc-d85a-4cbd-a8ed-429d01349e74","Type":"ContainerStarted","Data":"02fc889b1df3b74a9be2522bd9ae394e29f641c91c19d9e5e7e08b114dff7449"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.069682 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"f12fe971aa3091fdc23fe2a745281a85d99413ac5b3a51ac56765d0e758769be"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.069750 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cbc15e71-9605-466b-8947-aa2ca716bc2d","Type":"ContainerStarted","Data":"1dad5d6e61148844ff4e03323a745a94293a62a728c49e8a43d313e4be5a757c"} Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.119337 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-slxb5" Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.238635 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=42.061973832 podStartE2EDuration="50.238617601s" podCreationTimestamp="2025-12-09 14:44:54 +0000 UTC" firstStartedPulling="2025-12-09 14:45:31.744061411 +0000 UTC m=+1363.640263547" lastFinishedPulling="2025-12-09 14:45:39.92070518 +0000 UTC m=+1371.816907316" observedRunningTime="2025-12-09 14:45:44.236277648 +0000 UTC m=+1376.132479804" watchObservedRunningTime="2025-12-09 14:45:44.238617601 +0000 UTC m=+1376.134819737" Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.266180 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon 
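The swift-storage-0 startup record above is arithmetically self-consistent: subtracting the image-pull window (firstStartedPulling to lastFinishedPulling, read off the monotonic m=+ offsets) from podStartE2EDuration reproduces podStartSLOduration exactly, which matches the tracker discounting pull time from the SLO figure. A quick check, with the numbers copied from the entry:

    # Monotonic m=+ offsets and durations from the swift-storage-0 entry above.
    first_started_pulling = 1363.640263547
    last_finished_pulling = 1371.816907316
    e2e_duration = 50.238617601  # podStartE2EDuration

    pull_window = last_finished_pulling - first_started_pulling  # ~8.177s
    print(round(e2e_duration - pull_window, 9))  # 42.061973832 = podStartSLOduration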
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.266180 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.266629 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.531184 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2bfb-account-create-update-jsvdf"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.583990 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-djf6f"]
Dec 09 14:45:44 crc kubenswrapper[4770]: E1209 14:45:44.584569 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="508799c0-ff24-43d1-b131-cf60b96facfd" containerName="mariadb-account-create-update"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.584585 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="508799c0-ff24-43d1-b131-cf60b96facfd" containerName="mariadb-account-create-update"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.589063 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="508799c0-ff24-43d1-b131-cf60b96facfd" containerName="mariadb-account-create-update"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.590699 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.594224 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.622357 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508799c0-ff24-43d1-b131-cf60b96facfd-operator-scripts\") pod \"508799c0-ff24-43d1-b131-cf60b96facfd\" (UID: \"508799c0-ff24-43d1-b131-cf60b96facfd\") "
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.622555 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9grkc\" (UniqueName: \"kubernetes.io/projected/508799c0-ff24-43d1-b131-cf60b96facfd-kube-api-access-9grkc\") pod \"508799c0-ff24-43d1-b131-cf60b96facfd\" (UID: \"508799c0-ff24-43d1-b131-cf60b96facfd\") "
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.622852 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.622930 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-config\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.622966 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.622986 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhpn\" (UniqueName: \"kubernetes.io/projected/d22da327-1d9a-49b8-b67a-317118888295-kube-api-access-lvhpn\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.623180 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.623218 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.623888 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/508799c0-ff24-43d1-b131-cf60b96facfd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "508799c0-ff24-43d1-b131-cf60b96facfd" (UID: "508799c0-ff24-43d1-b131-cf60b96facfd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.725153 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.725204 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.725287 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.725355 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-config\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.725382 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.725409 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvhpn\" (UniqueName: \"kubernetes.io/projected/d22da327-1d9a-49b8-b67a-317118888295-kube-api-access-lvhpn\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.725495 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508799c0-ff24-43d1-b131-cf60b96facfd-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.729994 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.730118 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-config\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.730975 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/508799c0-ff24-43d1-b131-cf60b96facfd-kube-api-access-9grkc" (OuterVolumeSpecName: "kube-api-access-9grkc") pod "508799c0-ff24-43d1-b131-cf60b96facfd" (UID: "508799c0-ff24-43d1-b131-cf60b96facfd"). InnerVolumeSpecName "kube-api-access-9grkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.736544 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.736683 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.742541 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.746956 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvhpn\" (UniqueName: \"kubernetes.io/projected/d22da327-1d9a-49b8-b67a-317118888295-kube-api-access-lvhpn\") pod \"dnsmasq-dns-5c79d794d7-djf6f\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.830910 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.835335 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9grkc\" (UniqueName: \"kubernetes.io/projected/508799c0-ff24-43d1-b131-cf60b96facfd-kube-api-access-9grkc\") on node \"crc\" DevicePath \"\""
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.856499 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-djf6f"]
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.864268 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-769fn"
Dec 09 14:45:44 crc kubenswrapper[4770]: I1209 14:45:44.874929 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-slxb5-config-l48f2"
Need to start a new one" pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.037571 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snkgm\" (UniqueName: \"kubernetes.io/projected/95983696-25bb-4b13-8a96-59b7af59dda4-kube-api-access-snkgm\") pod \"95983696-25bb-4b13-8a96-59b7af59dda4\" (UID: \"95983696-25bb-4b13-8a96-59b7af59dda4\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.037642 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-additional-scripts\") pod \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.037667 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-log-ovn\") pod \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.037704 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znwjj\" (UniqueName: \"kubernetes.io/projected/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-kube-api-access-znwjj\") pod \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.037787 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a5a1feb7-615c-43ec-ad5b-4691c3a42f96" (UID: "a5a1feb7-615c-43ec-ad5b-4691c3a42f96"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.037806 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run\") pod \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.037851 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run-ovn\") pod \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.037903 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95983696-25bb-4b13-8a96-59b7af59dda4-operator-scripts\") pod \"95983696-25bb-4b13-8a96-59b7af59dda4\" (UID: \"95983696-25bb-4b13-8a96-59b7af59dda4\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.037969 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-scripts\") pod \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\" (UID: \"a5a1feb7-615c-43ec-ad5b-4691c3a42f96\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.038103 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a5a1feb7-615c-43ec-ad5b-4691c3a42f96" (UID: "a5a1feb7-615c-43ec-ad5b-4691c3a42f96"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.038160 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run" (OuterVolumeSpecName: "var-run") pod "a5a1feb7-615c-43ec-ad5b-4691c3a42f96" (UID: "a5a1feb7-615c-43ec-ad5b-4691c3a42f96"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.038551 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95983696-25bb-4b13-8a96-59b7af59dda4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95983696-25bb-4b13-8a96-59b7af59dda4" (UID: "95983696-25bb-4b13-8a96-59b7af59dda4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.038662 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a5a1feb7-615c-43ec-ad5b-4691c3a42f96" (UID: "a5a1feb7-615c-43ec-ad5b-4691c3a42f96"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.039060 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-scripts" (OuterVolumeSpecName: "scripts") pod "a5a1feb7-615c-43ec-ad5b-4691c3a42f96" (UID: "a5a1feb7-615c-43ec-ad5b-4691c3a42f96"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.042142 4770 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.042171 4770 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.042184 4770 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.042195 4770 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.042207 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95983696-25bb-4b13-8a96-59b7af59dda4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.042219 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.044298 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-kube-api-access-znwjj" (OuterVolumeSpecName: "kube-api-access-znwjj") pod "a5a1feb7-615c-43ec-ad5b-4691c3a42f96" (UID: "a5a1feb7-615c-43ec-ad5b-4691c3a42f96"). InnerVolumeSpecName "kube-api-access-znwjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.045975 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95983696-25bb-4b13-8a96-59b7af59dda4-kube-api-access-snkgm" (OuterVolumeSpecName: "kube-api-access-snkgm") pod "95983696-25bb-4b13-8a96-59b7af59dda4" (UID: "95983696-25bb-4b13-8a96-59b7af59dda4"). InnerVolumeSpecName "kube-api-access-snkgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.079495 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-769fn" event={"ID":"95983696-25bb-4b13-8a96-59b7af59dda4","Type":"ContainerDied","Data":"477fa24386f643456fdd78354acfefb04810e309bfebb231cf8cbb79373c4233"} Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.079534 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477fa24386f643456fdd78354acfefb04810e309bfebb231cf8cbb79373c4233" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.079584 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-769fn" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.090288 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2bfb-account-create-update-jsvdf" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.090879 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2bfb-account-create-update-jsvdf" event={"ID":"508799c0-ff24-43d1-b131-cf60b96facfd","Type":"ContainerDied","Data":"9c726e6dffae48ea21596d819f6bec33b310bf2e49c1fa454594506f8909285d"} Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.090916 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c726e6dffae48ea21596d819f6bec33b310bf2e49c1fa454594506f8909285d" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.092983 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-slxb5-config-l48f2" event={"ID":"a5a1feb7-615c-43ec-ad5b-4691c3a42f96","Type":"ContainerDied","Data":"2b2a0fa68e0e3c3d21c69f5a5971c33b15d1f115c9f0030768bd1ab4116eda19"} Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.093033 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b2a0fa68e0e3c3d21c69f5a5971c33b15d1f115c9f0030768bd1ab4116eda19" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.093100 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-slxb5-config-l48f2" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.107074 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ckkhz" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.107674 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ckkhz" event={"ID":"c33fdb1f-6ef8-479e-aa72-0f14af285ad7","Type":"ContainerDied","Data":"a58e013e32e81f747ae687b3d245f03c3c1c9a50e3250891630b2c35ad5d9dbe"} Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.107718 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58e013e32e81f747ae687b3d245f03c3c1c9a50e3250891630b2c35ad5d9dbe" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.157552 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq62p\" (UniqueName: \"kubernetes.io/projected/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-kube-api-access-bq62p\") pod \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\" (UID: \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.158738 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-operator-scripts\") pod \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\" (UID: \"c33fdb1f-6ef8-479e-aa72-0f14af285ad7\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.159387 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snkgm\" (UniqueName: \"kubernetes.io/projected/95983696-25bb-4b13-8a96-59b7af59dda4-kube-api-access-snkgm\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.159404 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znwjj\" (UniqueName: \"kubernetes.io/projected/a5a1feb7-615c-43ec-ad5b-4691c3a42f96-kube-api-access-znwjj\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.160997 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c33fdb1f-6ef8-479e-aa72-0f14af285ad7" (UID: "c33fdb1f-6ef8-479e-aa72-0f14af285ad7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.162931 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-kube-api-access-bq62p" (OuterVolumeSpecName: "kube-api-access-bq62p") pod "c33fdb1f-6ef8-479e-aa72-0f14af285ad7" (UID: "c33fdb1f-6ef8-479e-aa72-0f14af285ad7"). InnerVolumeSpecName "kube-api-access-bq62p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.261083 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq62p\" (UniqueName: \"kubernetes.io/projected/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-kube-api-access-bq62p\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.261123 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33fdb1f-6ef8-479e-aa72-0f14af285ad7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.366031 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-djf6f"] Dec 09 14:45:45 crc kubenswrapper[4770]: W1209 14:45:45.380507 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd22da327_1d9a_49b8_b67a_317118888295.slice/crio-c7a278a2b622d4cc8d31f3640251f97bfbce87700873214677d418e006cb5c39 WatchSource:0}: Error finding container c7a278a2b622d4cc8d31f3640251f97bfbce87700873214677d418e006cb5c39: Status 404 returned error can't find the container with id c7a278a2b622d4cc8d31f3640251f97bfbce87700873214677d418e006cb5c39 Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.524549 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.542866 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.668207 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bstrh\" (UniqueName: \"kubernetes.io/projected/ec87ce87-2276-4c01-bf53-60236cda1e26-kube-api-access-bstrh\") pod \"ec87ce87-2276-4c01-bf53-60236cda1e26\" (UID: \"ec87ce87-2276-4c01-bf53-60236cda1e26\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.668291 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5zmr\" (UniqueName: \"kubernetes.io/projected/23bb07cc-d85a-4cbd-a8ed-429d01349e74-kube-api-access-s5zmr\") pod \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\" (UID: \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.668390 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec87ce87-2276-4c01-bf53-60236cda1e26-operator-scripts\") pod \"ec87ce87-2276-4c01-bf53-60236cda1e26\" (UID: \"ec87ce87-2276-4c01-bf53-60236cda1e26\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.668513 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23bb07cc-d85a-4cbd-a8ed-429d01349e74-operator-scripts\") pod \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\" (UID: \"23bb07cc-d85a-4cbd-a8ed-429d01349e74\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.670365 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec87ce87-2276-4c01-bf53-60236cda1e26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec87ce87-2276-4c01-bf53-60236cda1e26" (UID: "ec87ce87-2276-4c01-bf53-60236cda1e26"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.680040 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec87ce87-2276-4c01-bf53-60236cda1e26-kube-api-access-bstrh" (OuterVolumeSpecName: "kube-api-access-bstrh") pod "ec87ce87-2276-4c01-bf53-60236cda1e26" (UID: "ec87ce87-2276-4c01-bf53-60236cda1e26"). InnerVolumeSpecName "kube-api-access-bstrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.683917 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23bb07cc-d85a-4cbd-a8ed-429d01349e74-kube-api-access-s5zmr" (OuterVolumeSpecName: "kube-api-access-s5zmr") pod "23bb07cc-d85a-4cbd-a8ed-429d01349e74" (UID: "23bb07cc-d85a-4cbd-a8ed-429d01349e74"). InnerVolumeSpecName "kube-api-access-s5zmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.684977 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23bb07cc-d85a-4cbd-a8ed-429d01349e74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23bb07cc-d85a-4cbd-a8ed-429d01349e74" (UID: "23bb07cc-d85a-4cbd-a8ed-429d01349e74"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.773928 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23bb07cc-d85a-4cbd-a8ed-429d01349e74-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.773966 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bstrh\" (UniqueName: \"kubernetes.io/projected/ec87ce87-2276-4c01-bf53-60236cda1e26-kube-api-access-bstrh\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.773979 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5zmr\" (UniqueName: \"kubernetes.io/projected/23bb07cc-d85a-4cbd-a8ed-429d01349e74-kube-api-access-s5zmr\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.773991 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec87ce87-2276-4c01-bf53-60236cda1e26-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.805971 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.807325 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.814672 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.967079 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.978701 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzwhm\" (UniqueName: \"kubernetes.io/projected/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-kube-api-access-fzwhm\") pod \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\" (UID: \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.978816 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-operator-scripts\") pod \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\" (UID: \"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f\") " Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.980026 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf1a70f2-cfcc-40ad-bbd4-6f307928b20f" (UID: "cf1a70f2-cfcc-40ad-bbd4-6f307928b20f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:45 crc kubenswrapper[4770]: I1209 14:45:45.986666 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.004436 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-kube-api-access-fzwhm" (OuterVolumeSpecName: "kube-api-access-fzwhm") pod "cf1a70f2-cfcc-40ad-bbd4-6f307928b20f" (UID: "cf1a70f2-cfcc-40ad-bbd4-6f307928b20f"). InnerVolumeSpecName "kube-api-access-fzwhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.038174 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-slxb5-config-l48f2"] Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.047967 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-slxb5-config-l48f2"] Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.099446 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-operator-scripts\") pod \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\" (UID: \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\") " Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.099824 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgth2\" (UniqueName: \"kubernetes.io/projected/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-kube-api-access-mgth2\") pod \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\" (UID: \"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad\") " Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.099852 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxnnm\" (UniqueName: \"kubernetes.io/projected/39626459-4da7-4aae-b9b4-187a18cd1c3e-kube-api-access-lxnnm\") pod \"39626459-4da7-4aae-b9b4-187a18cd1c3e\" (UID: \"39626459-4da7-4aae-b9b4-187a18cd1c3e\") " Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.099946 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39626459-4da7-4aae-b9b4-187a18cd1c3e-operator-scripts\") pod \"39626459-4da7-4aae-b9b4-187a18cd1c3e\" (UID: \"39626459-4da7-4aae-b9b4-187a18cd1c3e\") " Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.100488 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzwhm\" (UniqueName: \"kubernetes.io/projected/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-kube-api-access-fzwhm\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.100505 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.100591 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad" (UID: "7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.100851 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39626459-4da7-4aae-b9b4-187a18cd1c3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39626459-4da7-4aae-b9b4-187a18cd1c3e" (UID: "39626459-4da7-4aae-b9b4-187a18cd1c3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.107508 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-kube-api-access-mgth2" (OuterVolumeSpecName: "kube-api-access-mgth2") pod "7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad" (UID: "7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad"). InnerVolumeSpecName "kube-api-access-mgth2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.147428 4770 generic.go:334] "Generic (PLEG): container finished" podID="d22da327-1d9a-49b8-b67a-317118888295" containerID="4af8e55aac0150aed98cb7ebe057976bb13db3f3f9a9d089763a5cc104f205dd" exitCode=0 Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.147530 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" event={"ID":"d22da327-1d9a-49b8-b67a-317118888295","Type":"ContainerDied","Data":"4af8e55aac0150aed98cb7ebe057976bb13db3f3f9a9d089763a5cc104f205dd"} Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.147563 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" event={"ID":"d22da327-1d9a-49b8-b67a-317118888295","Type":"ContainerStarted","Data":"c7a278a2b622d4cc8d31f3640251f97bfbce87700873214677d418e006cb5c39"} Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.150428 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a98f-account-create-update-8gx6q" event={"ID":"cf1a70f2-cfcc-40ad-bbd4-6f307928b20f","Type":"ContainerDied","Data":"df9a09b5cbc5a9fbf12373100ae6c6ee57f15a07229c603a4f4468a0caeca442"} Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.150486 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df9a09b5cbc5a9fbf12373100ae6c6ee57f15a07229c603a4f4468a0caeca442" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.150576 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a98f-account-create-update-8gx6q" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.150831 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-slxb5-config-kmnrd"] Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.151025 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39626459-4da7-4aae-b9b4-187a18cd1c3e-kube-api-access-lxnnm" (OuterVolumeSpecName: "kube-api-access-lxnnm") pod "39626459-4da7-4aae-b9b4-187a18cd1c3e" (UID: "39626459-4da7-4aae-b9b4-187a18cd1c3e"). InnerVolumeSpecName "kube-api-access-lxnnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:46 crc kubenswrapper[4770]: E1209 14:45:46.161461 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a1feb7-615c-43ec-ad5b-4691c3a42f96" containerName="ovn-config" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.161498 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a1feb7-615c-43ec-ad5b-4691c3a42f96" containerName="ovn-config" Dec 09 14:45:46 crc kubenswrapper[4770]: E1209 14:45:46.161523 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95983696-25bb-4b13-8a96-59b7af59dda4" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.161533 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="95983696-25bb-4b13-8a96-59b7af59dda4" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: E1209 14:45:46.161552 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39626459-4da7-4aae-b9b4-187a18cd1c3e" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.161562 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="39626459-4da7-4aae-b9b4-187a18cd1c3e" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: E1209 14:45:46.161590 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23bb07cc-d85a-4cbd-a8ed-429d01349e74" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.161598 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="23bb07cc-d85a-4cbd-a8ed-429d01349e74" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: E1209 14:45:46.161619 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec87ce87-2276-4c01-bf53-60236cda1e26" containerName="mariadb-account-create-update" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.161628 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec87ce87-2276-4c01-bf53-60236cda1e26" containerName="mariadb-account-create-update" Dec 09 14:45:46 crc kubenswrapper[4770]: E1209 14:45:46.161673 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1a70f2-cfcc-40ad-bbd4-6f307928b20f" containerName="mariadb-account-create-update" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.161681 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1a70f2-cfcc-40ad-bbd4-6f307928b20f" containerName="mariadb-account-create-update" Dec 09 14:45:46 crc kubenswrapper[4770]: E1209 14:45:46.161711 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33fdb1f-6ef8-479e-aa72-0f14af285ad7" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.161718 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33fdb1f-6ef8-479e-aa72-0f14af285ad7" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: E1209 14:45:46.161742 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad" containerName="mariadb-account-create-update" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.161749 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad" containerName="mariadb-account-create-update" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.162291 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="39626459-4da7-4aae-b9b4-187a18cd1c3e" 
containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.162322 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="23bb07cc-d85a-4cbd-a8ed-429d01349e74" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.162348 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec87ce87-2276-4c01-bf53-60236cda1e26" containerName="mariadb-account-create-update" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.162373 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad" containerName="mariadb-account-create-update" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.162383 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="95983696-25bb-4b13-8a96-59b7af59dda4" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.162407 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a1feb7-615c-43ec-ad5b-4691c3a42f96" containerName="ovn-config" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.162436 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1a70f2-cfcc-40ad-bbd4-6f307928b20f" containerName="mariadb-account-create-update" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.162454 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33fdb1f-6ef8-479e-aa72-0f14af285ad7" containerName="mariadb-database-create" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.163544 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.164048 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-xcbxz" event={"ID":"23bb07cc-d85a-4cbd-a8ed-429d01349e74","Type":"ContainerDied","Data":"02fc889b1df3b74a9be2522bd9ae394e29f641c91c19d9e5e7e08b114dff7449"} Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.164087 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02fc889b1df3b74a9be2522bd9ae394e29f641c91c19d9e5e7e08b114dff7449" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.164196 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-xcbxz" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.166365 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.168326 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8111-account-create-update-9hcm5" event={"ID":"ec87ce87-2276-4c01-bf53-60236cda1e26","Type":"ContainerDied","Data":"9cca7a9e205491656ac9a4170a9d99fc5f70f6fa9af27f74658a62a8205bf585"} Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.168384 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cca7a9e205491656ac9a4170a9d99fc5f70f6fa9af27f74658a62a8205bf585" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.168354 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8111-account-create-update-9hcm5" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.172715 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9qxdb" event={"ID":"39626459-4da7-4aae-b9b4-187a18cd1c3e","Type":"ContainerDied","Data":"65ccfb8405e578266bd50b2d2bc59bb2fd87e39c3a55a26feb106b6ed82e9052"} Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.172789 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65ccfb8405e578266bd50b2d2bc59bb2fd87e39c3a55a26feb106b6ed82e9052" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.172908 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9qxdb" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.178179 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.180386 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-48e7-account-create-update-lwm9s" event={"ID":"7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad","Type":"ContainerDied","Data":"a53d5c3743031e5fba4cbcba82072adbacf166c42a369ef4046274dffe377ec6"} Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.180430 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a53d5c3743031e5fba4cbcba82072adbacf166c42a369ef4046274dffe377ec6" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.185914 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.195371 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-slxb5-config-kmnrd"] Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.205780 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39626459-4da7-4aae-b9b4-187a18cd1c3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.206136 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.206271 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgth2\" (UniqueName: \"kubernetes.io/projected/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad-kube-api-access-mgth2\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.206352 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxnnm\" (UniqueName: \"kubernetes.io/projected/39626459-4da7-4aae-b9b4-187a18cd1c3e-kube-api-access-lxnnm\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.308219 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run-ovn\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.308321 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-log-ovn\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.308386 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-scripts\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.308435 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-additional-scripts\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.308476 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr59n\" (UniqueName: \"kubernetes.io/projected/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-kube-api-access-cr59n\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.308581 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.410303 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run-ovn\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.410370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-log-ovn\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.410401 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-scripts\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.410430 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-additional-scripts\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " 
pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.410448 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr59n\" (UniqueName: \"kubernetes.io/projected/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-kube-api-access-cr59n\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.410520 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.410923 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.410997 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run-ovn\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.411036 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-log-ovn\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.414174 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-additional-scripts\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.416317 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-scripts\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.430379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr59n\" (UniqueName: \"kubernetes.io/projected/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-kube-api-access-cr59n\") pod \"ovn-controller-slxb5-config-kmnrd\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.590061 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.593268 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:45:46 crc kubenswrapper[4770]: I1209 14:45:46.618415 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a1feb7-615c-43ec-ad5b-4691c3a42f96" path="/var/lib/kubelet/pods/a5a1feb7-615c-43ec-ad5b-4691c3a42f96/volumes" Dec 09 14:45:47 crc kubenswrapper[4770]: I1209 14:45:47.986837 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="67dab40a-3d7c-4737-bca9-28dc6280071c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 14:45:48 crc kubenswrapper[4770]: I1209 14:45:48.698532 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 14:45:48 crc kubenswrapper[4770]: I1209 14:45:48.698883 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="prometheus" containerID="cri-o://c57f67cb4699c1e99bf8bbd21e94cdb66f7a76230afe74af9f578b7723910ef6" gracePeriod=600 Dec 09 14:45:48 crc kubenswrapper[4770]: I1209 14:45:48.699523 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="thanos-sidecar" containerID="cri-o://e2e9e0386cf941da98edf8f7eb32466b1956050d3cd236961bfd89f1f44bce35" gracePeriod=600 Dec 09 14:45:48 crc kubenswrapper[4770]: I1209 14:45:48.699564 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="config-reloader" containerID="cri-o://17d4127ceba499a5f6ddf6af23af2490c8233cbed8319b50eda60155e15c77c6" gracePeriod=600 Dec 09 14:45:49 crc kubenswrapper[4770]: I1209 14:45:49.217871 4770 generic.go:334] "Generic (PLEG): container finished" podID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerID="e2e9e0386cf941da98edf8f7eb32466b1956050d3cd236961bfd89f1f44bce35" exitCode=0 Dec 09 14:45:49 crc kubenswrapper[4770]: I1209 14:45:49.218120 4770 generic.go:334] "Generic (PLEG): container finished" podID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerID="17d4127ceba499a5f6ddf6af23af2490c8233cbed8319b50eda60155e15c77c6" exitCode=0 Dec 09 14:45:49 crc kubenswrapper[4770]: I1209 14:45:49.218129 4770 generic.go:334] "Generic (PLEG): container finished" podID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerID="c57f67cb4699c1e99bf8bbd21e94cdb66f7a76230afe74af9f578b7723910ef6" exitCode=0 Dec 09 14:45:49 crc kubenswrapper[4770]: I1209 14:45:49.217952 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerDied","Data":"e2e9e0386cf941da98edf8f7eb32466b1956050d3cd236961bfd89f1f44bce35"} Dec 09 14:45:49 crc kubenswrapper[4770]: I1209 14:45:49.218156 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerDied","Data":"17d4127ceba499a5f6ddf6af23af2490c8233cbed8319b50eda60155e15c77c6"} Dec 09 14:45:49 crc kubenswrapper[4770]: I1209 14:45:49.218166 4770 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerDied","Data":"c57f67cb4699c1e99bf8bbd21e94cdb66f7a76230afe74af9f578b7723910ef6"} Dec 09 14:45:49 crc kubenswrapper[4770]: I1209 14:45:49.643932 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:45:50 crc kubenswrapper[4770]: I1209 14:45:50.788922 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.585890 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.645140 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-tls-assets\") pod \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.645188 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-prometheus-metric-storage-rulefiles-0\") pod \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.645216 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-thanos-prometheus-http-client-file\") pod \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.645250 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-web-config\") pod \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.645473 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") pod \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.645505 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config-out\") pod \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.645541 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b9m7\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-kube-api-access-4b9m7\") pod \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 
14:45:51.645566 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config\") pod \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\" (UID: \"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0\") " Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.649874 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config" (OuterVolumeSpecName: "config") pod "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" (UID: "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.653800 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" (UID: "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.654520 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" (UID: "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.658417 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" (UID: "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.664856 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-kube-api-access-4b9m7" (OuterVolumeSpecName: "kube-api-access-4b9m7") pod "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" (UID: "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0"). InnerVolumeSpecName "kube-api-access-4b9m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.668405 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config-out" (OuterVolumeSpecName: "config-out") pod "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" (UID: "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.695398 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-web-config" (OuterVolumeSpecName: "web-config") pod "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" (UID: "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.712033 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" (UID: "7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0"). InnerVolumeSpecName "pvc-ff403788-e2d7-4f85-8261-191f5e36e620". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.764023 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") on node \"crc\" " Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.764073 4770 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config-out\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.764094 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b9m7\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-kube-api-access-4b9m7\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.764104 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.764112 4770 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.764121 4770 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.764151 4770 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.764161 4770 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0-web-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.882396 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-slxb5-config-kmnrd"] Dec 09 14:45:51 crc kubenswrapper[4770]: W1209 14:45:51.889323 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b71afd_5cdb_430e_b24f_1b7ee0fd8e22.slice/crio-0c9c98517170b1d96927798369fcd99c0767e355bb5fa139953ac15aee29e01f WatchSource:0}: Error finding container 0c9c98517170b1d96927798369fcd99c0767e355bb5fa139953ac15aee29e01f: Status 404 returned error can't find the container with id 0c9c98517170b1d96927798369fcd99c0767e355bb5fa139953ac15aee29e01f Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 
14:45:51.896071 4770 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.896472 4770 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ff403788-e2d7-4f85-8261-191f5e36e620" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620") on node "crc" Dec 09 14:45:51 crc kubenswrapper[4770]: I1209 14:45:51.968363 4770 reconciler_common.go:293] "Volume detached for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.252167 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xn76l" event={"ID":"a0ed519f-b86b-4baf-a731-ffe46bc15641","Type":"ContainerStarted","Data":"5b85f319e49cb8e18272397cdb391b79af9b7da060e009d5224d66ddeab41df0"} Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.254463 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-slxb5-config-kmnrd" event={"ID":"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22","Type":"ContainerStarted","Data":"8ae6be449115a496f5b04e750b3f4f29652f08349b285fe4043bc2c846adf369"} Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.254510 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-slxb5-config-kmnrd" event={"ID":"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22","Type":"ContainerStarted","Data":"0c9c98517170b1d96927798369fcd99c0767e355bb5fa139953ac15aee29e01f"} Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.258321 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" event={"ID":"d22da327-1d9a-49b8-b67a-317118888295","Type":"ContainerStarted","Data":"07d3e5b68dd279277808cdf199c174824e85036bb88967c64a98b3b292722743"} Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.258479 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.261885 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0","Type":"ContainerDied","Data":"4c4c97ac27dfce5849cb07be02511bbf4b31f4c0c5c7c82d5d18b4e308004e77"} Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.261947 4770 scope.go:117] "RemoveContainer" containerID="e2e9e0386cf941da98edf8f7eb32466b1956050d3cd236961bfd89f1f44bce35" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.262103 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.264612 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gbddk" event={"ID":"b0c0c709-cb21-49e0-ba23-211f0cd1749d","Type":"ContainerStarted","Data":"53a17b5b76555580cebf00cc3ccdfccc549095929ee984660b7daf6437f7f746"} Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.281781 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xn76l" podStartSLOduration=4.006451041 podStartE2EDuration="12.281755274s" podCreationTimestamp="2025-12-09 14:45:40 +0000 UTC" firstStartedPulling="2025-12-09 14:45:43.038695437 +0000 UTC m=+1374.934897563" lastFinishedPulling="2025-12-09 14:45:51.31399966 +0000 UTC m=+1383.210201796" observedRunningTime="2025-12-09 14:45:52.272091106 +0000 UTC m=+1384.168293242" watchObservedRunningTime="2025-12-09 14:45:52.281755274 +0000 UTC m=+1384.177957410" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.300379 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-slxb5-config-kmnrd" podStartSLOduration=6.300363301 podStartE2EDuration="6.300363301s" podCreationTimestamp="2025-12-09 14:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:52.294774212 +0000 UTC m=+1384.190976358" watchObservedRunningTime="2025-12-09 14:45:52.300363301 +0000 UTC m=+1384.196565427" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.302100 4770 scope.go:117] "RemoveContainer" containerID="17d4127ceba499a5f6ddf6af23af2490c8233cbed8319b50eda60155e15c77c6" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.324707 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gbddk" podStartSLOduration=4.502506051 podStartE2EDuration="40.324685001s" podCreationTimestamp="2025-12-09 14:45:12 +0000 UTC" firstStartedPulling="2025-12-09 14:45:15.493923137 +0000 UTC m=+1347.390125273" lastFinishedPulling="2025-12-09 14:45:51.316102097 +0000 UTC m=+1383.212304223" observedRunningTime="2025-12-09 14:45:52.314567901 +0000 UTC m=+1384.210770037" watchObservedRunningTime="2025-12-09 14:45:52.324685001 +0000 UTC m=+1384.220887137" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.327447 4770 scope.go:117] "RemoveContainer" containerID="c57f67cb4699c1e99bf8bbd21e94cdb66f7a76230afe74af9f578b7723910ef6" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.331672 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" podStartSLOduration=8.331655107 podStartE2EDuration="8.331655107s" podCreationTimestamp="2025-12-09 14:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:45:52.329222182 +0000 UTC m=+1384.225424328" watchObservedRunningTime="2025-12-09 14:45:52.331655107 +0000 UTC m=+1384.227857243" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.354629 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.354674 4770 scope.go:117] "RemoveContainer" containerID="a9a595d8db596474a74b7145ea47c3669fe97865184fa19856a0ba99f5f74215" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.368459 4770 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.380574 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 14:45:52 crc kubenswrapper[4770]: E1209 14:45:52.380948 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="config-reloader" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.380967 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="config-reloader" Dec 09 14:45:52 crc kubenswrapper[4770]: E1209 14:45:52.380976 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="prometheus" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.380983 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="prometheus" Dec 09 14:45:52 crc kubenswrapper[4770]: E1209 14:45:52.381007 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="init-config-reloader" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.381013 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="init-config-reloader" Dec 09 14:45:52 crc kubenswrapper[4770]: E1209 14:45:52.381028 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="thanos-sidecar" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.381034 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="thanos-sidecar" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.381227 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="config-reloader" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.381244 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="thanos-sidecar" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.381258 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" containerName="prometheus" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.383059 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.387816 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zfw4p" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.388043 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.388618 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.388916 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.390128 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.395218 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.401140 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.409791 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.583311 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.583364 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.583464 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.583806 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.584082 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-secret-combined-ca-bundle\") 
pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.584149 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.584208 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.584245 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.584373 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.584465 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.584577 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7pqv\" (UniqueName: \"kubernetes.io/projected/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-kube-api-access-m7pqv\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.602060 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0" path="/var/lib/kubelet/pods/7c6a1039-13ac-4d63-b0b0-3f54e7c3cee0/volumes" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.686270 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.686761 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.686790 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.686851 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.686912 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.686939 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.686994 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.687033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.687077 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7pqv\" (UniqueName: \"kubernetes.io/projected/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-kube-api-access-m7pqv\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.687118 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " 
pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.687149 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.688308 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.693748 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.696049 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.697430 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.697681 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.697849 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ffb0c130d32fd7a4a3dbad94b288743aa1490975c711021e1be0373426019df3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.698372 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.698859 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.701060 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.703755 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.707657 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7pqv\" (UniqueName: \"kubernetes.io/projected/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-kube-api-access-m7pqv\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.712236 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb4d3067-7df3-4aae-ad4a-7e24c480d3f8-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:52 crc kubenswrapper[4770]: I1209 14:45:52.758309 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff403788-e2d7-4f85-8261-191f5e36e620\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff403788-e2d7-4f85-8261-191f5e36e620\") pod \"prometheus-metric-storage-0\" (UID: \"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8\") " pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:53 crc kubenswrapper[4770]: I1209 14:45:53.050292 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 14:45:53 crc kubenswrapper[4770]: I1209 14:45:53.279124 4770 generic.go:334] "Generic (PLEG): container finished" podID="c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" containerID="8ae6be449115a496f5b04e750b3f4f29652f08349b285fe4043bc2c846adf369" exitCode=0 Dec 09 14:45:53 crc kubenswrapper[4770]: I1209 14:45:53.279790 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-slxb5-config-kmnrd" event={"ID":"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22","Type":"ContainerDied","Data":"8ae6be449115a496f5b04e750b3f4f29652f08349b285fe4043bc2c846adf369"} Dec 09 14:45:53 crc kubenswrapper[4770]: I1209 14:45:53.364869 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.293655 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8","Type":"ContainerStarted","Data":"848faf5bc73c9da89816d1d7f8a4e5bbe29f9d5827e75688a41c844b4046788c"} Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.629627 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.739092 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run\") pod \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.739174 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr59n\" (UniqueName: \"kubernetes.io/projected/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-kube-api-access-cr59n\") pod \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.739232 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run" (OuterVolumeSpecName: "var-run") pod "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" (UID: "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.739294 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run-ovn\") pod \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.739323 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-scripts\") pod \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.739337 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-log-ovn\") pod \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.739378 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-additional-scripts\") pod \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\" (UID: \"c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22\") " Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.739657 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" (UID: "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.739671 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" (UID: "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.740158 4770 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.740176 4770 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.740188 4770 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.740193 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" (UID: "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.740549 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-scripts" (OuterVolumeSpecName: "scripts") pod "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" (UID: "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.746589 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-kube-api-access-cr59n" (OuterVolumeSpecName: "kube-api-access-cr59n") pod "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" (UID: "c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22"). InnerVolumeSpecName "kube-api-access-cr59n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.841861 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr59n\" (UniqueName: \"kubernetes.io/projected/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-kube-api-access-cr59n\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.841905 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.841920 4770 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.938020 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-slxb5-config-kmnrd"] Dec 09 14:45:54 crc kubenswrapper[4770]: I1209 14:45:54.946538 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-slxb5-config-kmnrd"] Dec 09 14:45:55 crc kubenswrapper[4770]: I1209 14:45:55.304451 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c9c98517170b1d96927798369fcd99c0767e355bb5fa139953ac15aee29e01f" Dec 09 14:45:55 crc kubenswrapper[4770]: I1209 14:45:55.304513 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-slxb5-config-kmnrd" Dec 09 14:45:56 crc kubenswrapper[4770]: I1209 14:45:56.606004 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" path="/var/lib/kubelet/pods/c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22/volumes" Dec 09 14:45:57 crc kubenswrapper[4770]: I1209 14:45:57.344211 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8","Type":"ContainerStarted","Data":"f45a558d9a8c1f9c48ddb2e355d74e089883112334498c973c1c3207379ddab4"} Dec 09 14:45:57 crc kubenswrapper[4770]: I1209 14:45:57.989296 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 14:45:59 crc kubenswrapper[4770]: I1209 14:45:59.833883 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" Dec 09 14:45:59 crc kubenswrapper[4770]: I1209 14:45:59.911810 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fw9zr"] Dec 09 14:45:59 crc kubenswrapper[4770]: I1209 14:45:59.912053 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" podUID="d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" containerName="dnsmasq-dns" containerID="cri-o://3c35eac63a40b9ced8754d68bc84fb6212067662778f4f8b37fc4f134acfd619" gracePeriod=10 Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.399562 4770 generic.go:334] "Generic (PLEG): container finished" podID="d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" containerID="3c35eac63a40b9ced8754d68bc84fb6212067662778f4f8b37fc4f134acfd619" exitCode=0 Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.399606 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" event={"ID":"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d","Type":"ContainerDied","Data":"3c35eac63a40b9ced8754d68bc84fb6212067662778f4f8b37fc4f134acfd619"} Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.550110 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.679206 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsn7p\" (UniqueName: \"kubernetes.io/projected/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-kube-api-access-gsn7p\") pod \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.679348 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-sb\") pod \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.679426 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-nb\") pod \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.679513 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-dns-svc\") pod \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.679567 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-config\") pod \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\" (UID: \"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d\") " Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.684864 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-kube-api-access-gsn7p" (OuterVolumeSpecName: "kube-api-access-gsn7p") pod "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" (UID: "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d"). InnerVolumeSpecName "kube-api-access-gsn7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.724761 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-config" (OuterVolumeSpecName: "config") pod "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" (UID: "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.734013 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" (UID: "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.747028 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" (UID: "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.748227 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" (UID: "d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.782774 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsn7p\" (UniqueName: \"kubernetes.io/projected/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-kube-api-access-gsn7p\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.782828 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.782838 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.782848 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:00 crc kubenswrapper[4770]: I1209 14:46:00.782877 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:01 crc kubenswrapper[4770]: I1209 14:46:01.409944 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" event={"ID":"d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d","Type":"ContainerDied","Data":"9e0e4c189d283cfee9be63ec26664b5dbeb49c24c1f9687c8ae9c49998c99516"} Dec 09 14:46:01 crc kubenswrapper[4770]: I1209 14:46:01.409986 4770 scope.go:117] "RemoveContainer" containerID="3c35eac63a40b9ced8754d68bc84fb6212067662778f4f8b37fc4f134acfd619" Dec 09 14:46:01 crc kubenswrapper[4770]: I1209 14:46:01.410031 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-fw9zr" Dec 09 14:46:01 crc kubenswrapper[4770]: I1209 14:46:01.437448 4770 scope.go:117] "RemoveContainer" containerID="a99b06c297dc779da4bcf0e3f3d9e63fba5a71f67d9a0982debe51a3bb2bb151" Dec 09 14:46:01 crc kubenswrapper[4770]: I1209 14:46:01.444008 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fw9zr"] Dec 09 14:46:01 crc kubenswrapper[4770]: I1209 14:46:01.451960 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fw9zr"] Dec 09 14:46:02 crc kubenswrapper[4770]: I1209 14:46:02.598796 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" path="/var/lib/kubelet/pods/d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d/volumes" Dec 09 14:46:04 crc kubenswrapper[4770]: I1209 14:46:04.444065 4770 generic.go:334] "Generic (PLEG): container finished" podID="bb4d3067-7df3-4aae-ad4a-7e24c480d3f8" containerID="f45a558d9a8c1f9c48ddb2e355d74e089883112334498c973c1c3207379ddab4" exitCode=0 Dec 09 14:46:04 crc kubenswrapper[4770]: I1209 14:46:04.444109 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8","Type":"ContainerDied","Data":"f45a558d9a8c1f9c48ddb2e355d74e089883112334498c973c1c3207379ddab4"} Dec 09 14:46:05 crc kubenswrapper[4770]: I1209 14:46:05.457632 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8","Type":"ContainerStarted","Data":"6ad8ef96ccf1a5ae80490fc2f66f3af339f42b5ddeb4a3901b58529538252d53"} Dec 09 14:46:08 crc kubenswrapper[4770]: I1209 14:46:08.493957 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8","Type":"ContainerStarted","Data":"7f4c1c1f000ef364b694ac3e1b9a2143c8335fe46934d03cd726d3589d67d397"} Dec 09 14:46:10 crc kubenswrapper[4770]: I1209 14:46:10.527501 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb4d3067-7df3-4aae-ad4a-7e24c480d3f8","Type":"ContainerStarted","Data":"162152eabd05d4c59dcee921a381046da1b5cdf2f22c5d3e896d227e353dfc26"} Dec 09 14:46:11 crc kubenswrapper[4770]: I1209 14:46:11.579148 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.579127834 podStartE2EDuration="19.579127834s" podCreationTimestamp="2025-12-09 14:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:46:11.565228003 +0000 UTC m=+1403.461430139" watchObservedRunningTime="2025-12-09 14:46:11.579127834 +0000 UTC m=+1403.475329980" Dec 09 14:46:13 crc kubenswrapper[4770]: I1209 14:46:13.051080 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.244000 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.244117 4770 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.244187 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.245348 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08e4c65cee400d2486f41106aee41be450d436f2cac9e02f916b74733c20d0e5"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.245428 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://08e4c65cee400d2486f41106aee41be450d436f2cac9e02f916b74733c20d0e5" gracePeriod=600 Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.570404 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="08e4c65cee400d2486f41106aee41be450d436f2cac9e02f916b74733c20d0e5" exitCode=0 Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.570461 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"08e4c65cee400d2486f41106aee41be450d436f2cac9e02f916b74733c20d0e5"} Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.570501 4770 scope.go:117] "RemoveContainer" containerID="0f16eaf98d6441c99fac37159c836b0846fa6ac7bd81ba244c2067e5f830e8c2" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.613433 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4bk6s"] Dec 09 14:46:14 crc kubenswrapper[4770]: E1209 14:46:14.613862 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" containerName="dnsmasq-dns" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.613878 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" containerName="dnsmasq-dns" Dec 09 14:46:14 crc kubenswrapper[4770]: E1209 14:46:14.613891 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" containerName="ovn-config" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.613898 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" containerName="ovn-config" Dec 09 14:46:14 crc kubenswrapper[4770]: E1209 14:46:14.613911 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" containerName="init" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.613917 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" containerName="init" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.614103 4770 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c7b71afd-5cdb-430e-b24f-1b7ee0fd8e22" containerName="ovn-config" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.614120 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0071ecd-b5e2-4ee8-ac00-5ff90be3b57d" containerName="dnsmasq-dns" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.615416 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.639330 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bk6s"] Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.749395 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndmkn\" (UniqueName: \"kubernetes.io/projected/f8c060ba-3638-443d-9c49-f097eaf5eb62-kube-api-access-ndmkn\") pod \"redhat-operators-4bk6s\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.749441 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-utilities\") pod \"redhat-operators-4bk6s\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.749873 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-catalog-content\") pod \"redhat-operators-4bk6s\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.852797 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-catalog-content\") pod \"redhat-operators-4bk6s\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.852879 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndmkn\" (UniqueName: \"kubernetes.io/projected/f8c060ba-3638-443d-9c49-f097eaf5eb62-kube-api-access-ndmkn\") pod \"redhat-operators-4bk6s\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.852900 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-utilities\") pod \"redhat-operators-4bk6s\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.853300 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-catalog-content\") pod \"redhat-operators-4bk6s\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.853314 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-utilities\") pod \"redhat-operators-4bk6s\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.874931 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndmkn\" (UniqueName: \"kubernetes.io/projected/f8c060ba-3638-443d-9c49-f097eaf5eb62-kube-api-access-ndmkn\") pod \"redhat-operators-4bk6s\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:14 crc kubenswrapper[4770]: I1209 14:46:14.943416 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:15 crc kubenswrapper[4770]: I1209 14:46:15.430293 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bk6s"] Dec 09 14:46:15 crc kubenswrapper[4770]: W1209 14:46:15.430568 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8c060ba_3638_443d_9c49_f097eaf5eb62.slice/crio-6cadf372a6664ca733855fa433b16a88d8ffeff4ff4bf5cb8eb1b54a1ec6abb8 WatchSource:0}: Error finding container 6cadf372a6664ca733855fa433b16a88d8ffeff4ff4bf5cb8eb1b54a1ec6abb8: Status 404 returned error can't find the container with id 6cadf372a6664ca733855fa433b16a88d8ffeff4ff4bf5cb8eb1b54a1ec6abb8 Dec 09 14:46:15 crc kubenswrapper[4770]: I1209 14:46:15.583826 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bk6s" event={"ID":"f8c060ba-3638-443d-9c49-f097eaf5eb62","Type":"ContainerStarted","Data":"6cadf372a6664ca733855fa433b16a88d8ffeff4ff4bf5cb8eb1b54a1ec6abb8"} Dec 09 14:46:15 crc kubenswrapper[4770]: I1209 14:46:15.587693 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b"} Dec 09 14:46:16 crc kubenswrapper[4770]: I1209 14:46:16.606145 4770 generic.go:334] "Generic (PLEG): container finished" podID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerID="bd855d08e0acfc412c9fbdc0cf66fa2ee88d36917f6fd3fc594edca08ad0338b" exitCode=0 Dec 09 14:46:16 crc kubenswrapper[4770]: I1209 14:46:16.610123 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bk6s" event={"ID":"f8c060ba-3638-443d-9c49-f097eaf5eb62","Type":"ContainerDied","Data":"bd855d08e0acfc412c9fbdc0cf66fa2ee88d36917f6fd3fc594edca08ad0338b"} Dec 09 14:46:17 crc kubenswrapper[4770]: I1209 14:46:17.616635 4770 generic.go:334] "Generic (PLEG): container finished" podID="a0ed519f-b86b-4baf-a731-ffe46bc15641" containerID="5b85f319e49cb8e18272397cdb391b79af9b7da060e009d5224d66ddeab41df0" exitCode=0 Dec 09 14:46:17 crc kubenswrapper[4770]: I1209 14:46:17.616710 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xn76l" event={"ID":"a0ed519f-b86b-4baf-a731-ffe46bc15641","Type":"ContainerDied","Data":"5b85f319e49cb8e18272397cdb391b79af9b7da060e009d5224d66ddeab41df0"} Dec 09 14:46:18 crc kubenswrapper[4770]: I1209 14:46:18.632067 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bk6s" 
event={"ID":"f8c060ba-3638-443d-9c49-f097eaf5eb62","Type":"ContainerStarted","Data":"c1fd2ae75036f1ac811f290d15680720ed3b08cd141fb2287e5e59aee107d1f9"} Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.039898 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xn76l" Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.130584 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-combined-ca-bundle\") pod \"a0ed519f-b86b-4baf-a731-ffe46bc15641\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.130706 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ldkg\" (UniqueName: \"kubernetes.io/projected/a0ed519f-b86b-4baf-a731-ffe46bc15641-kube-api-access-7ldkg\") pod \"a0ed519f-b86b-4baf-a731-ffe46bc15641\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.130820 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-config-data\") pod \"a0ed519f-b86b-4baf-a731-ffe46bc15641\" (UID: \"a0ed519f-b86b-4baf-a731-ffe46bc15641\") " Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.137798 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ed519f-b86b-4baf-a731-ffe46bc15641-kube-api-access-7ldkg" (OuterVolumeSpecName: "kube-api-access-7ldkg") pod "a0ed519f-b86b-4baf-a731-ffe46bc15641" (UID: "a0ed519f-b86b-4baf-a731-ffe46bc15641"). InnerVolumeSpecName "kube-api-access-7ldkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.170594 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0ed519f-b86b-4baf-a731-ffe46bc15641" (UID: "a0ed519f-b86b-4baf-a731-ffe46bc15641"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.194611 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-config-data" (OuterVolumeSpecName: "config-data") pod "a0ed519f-b86b-4baf-a731-ffe46bc15641" (UID: "a0ed519f-b86b-4baf-a731-ffe46bc15641"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.233426 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.233471 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ed519f-b86b-4baf-a731-ffe46bc15641-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.233490 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ldkg\" (UniqueName: \"kubernetes.io/projected/a0ed519f-b86b-4baf-a731-ffe46bc15641-kube-api-access-7ldkg\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.643942 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xn76l" Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.644282 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xn76l" event={"ID":"a0ed519f-b86b-4baf-a731-ffe46bc15641","Type":"ContainerDied","Data":"3849be864bec3e3f2adc08f8fd1c937cba5de8b1bae66f3ffaed29f42c952ec1"} Dec 09 14:46:19 crc kubenswrapper[4770]: I1209 14:46:19.644320 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3849be864bec3e3f2adc08f8fd1c937cba5de8b1bae66f3ffaed29f42c952ec1" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.014321 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jv2s6"] Dec 09 14:46:20 crc kubenswrapper[4770]: E1209 14:46:20.014696 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ed519f-b86b-4baf-a731-ffe46bc15641" containerName="keystone-db-sync" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.014708 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ed519f-b86b-4baf-a731-ffe46bc15641" containerName="keystone-db-sync" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.014936 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ed519f-b86b-4baf-a731-ffe46bc15641" containerName="keystone-db-sync" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.015953 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.042562 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vt6df"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.043831 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.058636 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.058842 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.058947 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fnhpf" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.059045 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.059166 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.074815 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jv2s6"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.084794 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vt6df"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.152817 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.152862 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-svc\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.152907 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.153045 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-config-data\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.153098 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-fernet-keys\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.155791 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: 
\"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.155852 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-credential-keys\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.155890 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-combined-ca-bundle\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.155933 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-scripts\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.155968 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-config\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.156013 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds8fk\" (UniqueName: \"kubernetes.io/projected/45da50a1-7520-4d42-8489-051d4df675fa-kube-api-access-ds8fk\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.156029 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2wfw\" (UniqueName: \"kubernetes.io/projected/bb61e89a-8782-4d2a-8677-6297a9e07a6c-kube-api-access-n2wfw\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.187571 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dpmnb"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.189187 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.199216 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-g6xdt" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.199521 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.199867 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.206544 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bwk2j"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.207786 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.217221 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pfjpt" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.217360 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.217470 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.223001 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dpmnb"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.243778 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bwk2j"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258338 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-scripts\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258395 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258431 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-svc\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258470 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577pq\" (UniqueName: \"kubernetes.io/projected/efd509c2-c0ac-450e-84d3-14e9e8935f1c-kube-api-access-577pq\") pod \"neutron-db-sync-bwk2j\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") " pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258492 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: 
\"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258544 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-config-data\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258564 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-combined-ca-bundle\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258593 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-config-data\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258619 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-combined-ca-bundle\") pod \"neutron-db-sync-bwk2j\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") " pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258646 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-fernet-keys\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258678 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg8n8\" (UniqueName: \"kubernetes.io/projected/b7669f5b-7406-4ef5-833b-f69821551b08-kube-api-access-tg8n8\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258709 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258764 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-credential-keys\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258791 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-db-sync-config-data\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " 
pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258819 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-combined-ca-bundle\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258865 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-scripts\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258894 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-config\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258915 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7669f5b-7406-4ef5-833b-f69821551b08-etc-machine-id\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258944 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-config\") pod \"neutron-db-sync-bwk2j\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") " pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258970 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2wfw\" (UniqueName: \"kubernetes.io/projected/bb61e89a-8782-4d2a-8677-6297a9e07a6c-kube-api-access-n2wfw\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.258993 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds8fk\" (UniqueName: \"kubernetes.io/projected/45da50a1-7520-4d42-8489-051d4df675fa-kube-api-access-ds8fk\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.260251 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.260793 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-svc\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.261337 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.272821 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-config\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.273057 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.274993 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-config-data\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.276321 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-combined-ca-bundle\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.279147 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-scripts\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.279153 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-credential-keys\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.283940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-fernet-keys\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.300587 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds8fk\" (UniqueName: \"kubernetes.io/projected/45da50a1-7520-4d42-8489-051d4df675fa-kube-api-access-ds8fk\") pod \"keystone-bootstrap-vt6df\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.306790 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-k7gz4"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.308178 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.309480 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2wfw\" (UniqueName: \"kubernetes.io/projected/bb61e89a-8782-4d2a-8677-6297a9e07a6c-kube-api-access-n2wfw\") pod \"dnsmasq-dns-5b868669f-jv2s6\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.311099 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sbdj5" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.311359 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.326779 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-k7gz4"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.337155 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.360671 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7m5xq"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.362091 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.366719 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-db-sync-config-data\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.366858 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-combined-ca-bundle\") pod \"barbican-db-sync-k7gz4\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.366904 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7669f5b-7406-4ef5-833b-f69821551b08-etc-machine-id\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.366928 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-config\") pod \"neutron-db-sync-bwk2j\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") " pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.366976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-scripts\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367014 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-577pq\" (UniqueName: 
\"kubernetes.io/projected/efd509c2-c0ac-450e-84d3-14e9e8935f1c-kube-api-access-577pq\") pod \"neutron-db-sync-bwk2j\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") " pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367044 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-db-sync-config-data\") pod \"barbican-db-sync-k7gz4\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367060 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb2cj\" (UniqueName: \"kubernetes.io/projected/017d5a4f-99ba-4d3f-9053-207e6f414ab1-kube-api-access-pb2cj\") pod \"barbican-db-sync-k7gz4\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367094 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-config-data\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367112 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-combined-ca-bundle\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367138 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-combined-ca-bundle\") pod \"neutron-db-sync-bwk2j\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") " pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367169 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg8n8\" (UniqueName: \"kubernetes.io/projected/b7669f5b-7406-4ef5-833b-f69821551b08-kube-api-access-tg8n8\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367706 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367785 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.367981 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bzfdg" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.373037 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7669f5b-7406-4ef5-833b-f69821551b08-etc-machine-id\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.373100 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5b868669f-jv2s6"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.377442 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-combined-ca-bundle\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.382328 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.383260 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-config\") pod \"neutron-db-sync-bwk2j\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") " pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.383368 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-config-data\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.404145 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-db-sync-config-data\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.409308 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7m5xq"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.409952 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-combined-ca-bundle\") pod \"neutron-db-sync-bwk2j\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") " pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.415999 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg8n8\" (UniqueName: \"kubernetes.io/projected/b7669f5b-7406-4ef5-833b-f69821551b08-kube-api-access-tg8n8\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.419633 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-577pq\" (UniqueName: \"kubernetes.io/projected/efd509c2-c0ac-450e-84d3-14e9e8935f1c-kube-api-access-577pq\") pod \"neutron-db-sync-bwk2j\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") " pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.431562 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-scripts\") pod \"cinder-db-sync-dpmnb\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.476261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb2cj\" (UniqueName: 
\"kubernetes.io/projected/017d5a4f-99ba-4d3f-9053-207e6f414ab1-kube-api-access-pb2cj\") pod \"barbican-db-sync-k7gz4\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.476312 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-db-sync-config-data\") pod \"barbican-db-sync-k7gz4\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.476381 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-config-data\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.476405 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h5gd\" (UniqueName: \"kubernetes.io/projected/f4d4710e-a3e8-493e-9f2c-38839187d587-kube-api-access-6h5gd\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.476443 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-combined-ca-bundle\") pod \"barbican-db-sync-k7gz4\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.476472 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-scripts\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.476504 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d4710e-a3e8-493e-9f2c-38839187d587-logs\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.476548 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-combined-ca-bundle\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.482199 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-db-sync-config-data\") pod \"barbican-db-sync-k7gz4\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.482307 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9mw5m"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.485202 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-combined-ca-bundle\") pod \"barbican-db-sync-k7gz4\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.485561 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.506718 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.507283 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb2cj\" (UniqueName: \"kubernetes.io/projected/017d5a4f-99ba-4d3f-9053-207e6f414ab1-kube-api-access-pb2cj\") pod \"barbican-db-sync-k7gz4\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.531146 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bwk2j" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.535961 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9mw5m"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.579445 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.579832 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.579867 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-config-data\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.579910 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h5gd\" (UniqueName: \"kubernetes.io/projected/f4d4710e-a3e8-493e-9f2c-38839187d587-kube-api-access-6h5gd\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.579968 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-scripts\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.579992 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45gn\" (UniqueName: 
\"kubernetes.io/projected/17af40df-932c-4a21-8463-5d60c1335003-kube-api-access-s45gn\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.580039 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d4710e-a3e8-493e-9f2c-38839187d587-logs\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.580093 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.580138 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-combined-ca-bundle\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.580202 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-svc\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.580253 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-config\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.583290 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d4710e-a3e8-493e-9f2c-38839187d587-logs\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.586481 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-scripts\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.594465 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-config-data\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.598031 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-combined-ca-bundle\") pod \"placement-db-sync-7m5xq\" (UID: 
\"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.649505 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.670311 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h5gd\" (UniqueName: \"kubernetes.io/projected/f4d4710e-a3e8-493e-9f2c-38839187d587-kube-api-access-6h5gd\") pod \"placement-db-sync-7m5xq\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.690767 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7m5xq" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.694108 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-svc\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.694224 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-config\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.694378 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.694437 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.694552 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s45gn\" (UniqueName: \"kubernetes.io/projected/17af40df-932c-4a21-8463-5d60c1335003-kube-api-access-s45gn\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.694638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.699874 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc 
kubenswrapper[4770]: I1209 14:46:20.700342 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-svc\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.700491 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.700647 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-9vv8k"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.700831 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-config\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.700646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.708916 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.708827 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-9vv8k"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.725150 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.732754 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45gn\" (UniqueName: \"kubernetes.io/projected/17af40df-932c-4a21-8463-5d60c1335003-kube-api-access-s45gn\") pod \"dnsmasq-dns-cf78879c9-9mw5m\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.748970 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.749813 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-6zmqw" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.761028 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.777221 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.778487 4770 generic.go:334] "Generic (PLEG): container finished" podID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerID="c1fd2ae75036f1ac811f290d15680720ed3b08cd141fb2287e5e59aee107d1f9" exitCode=0 Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.799879 4770 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.800146 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bk6s" event={"ID":"f8c060ba-3638-443d-9c49-f097eaf5eb62","Type":"ContainerDied","Data":"c1fd2ae75036f1ac811f290d15680720ed3b08cd141fb2287e5e59aee107d1f9"} Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.801081 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-certs\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.801135 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-config-data\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.800097 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.801198 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-combined-ca-bundle\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.801328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg6jq\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-kube-api-access-kg6jq\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.801359 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-scripts\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.806077 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.806258 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.903869 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.903937 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-config-data\") pod \"ceilometer-0\" (UID: 
\"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.903986 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-log-httpd\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.904066 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnpcs\" (UniqueName: \"kubernetes.io/projected/7af45895-9497-4517-a8d8-56a64510ac72-kube-api-access-vnpcs\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.904113 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-certs\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.904150 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-config-data\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.904196 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-combined-ca-bundle\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.904244 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-scripts\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.904336 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.904360 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-run-httpd\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.904384 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg6jq\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-kube-api-access-kg6jq\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.904413 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-scripts\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.947528 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jv2s6"] Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.964929 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-scripts\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.968392 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-certs\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.970919 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-config-data\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:20 crc kubenswrapper[4770]: I1209 14:46:20.985624 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg6jq\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-kube-api-access-kg6jq\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.008105 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.008159 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-config-data\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.008185 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-log-httpd\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.008242 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnpcs\" (UniqueName: \"kubernetes.io/projected/7af45895-9497-4517-a8d8-56a64510ac72-kube-api-access-vnpcs\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.008316 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-scripts\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.008380 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.008406 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-run-httpd\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.013174 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-log-httpd\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.020998 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-run-httpd\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.025159 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.025745 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.025839 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-scripts\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.026337 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-combined-ca-bundle\") pod \"cloudkitty-db-sync-9vv8k\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.034079 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.082165 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.101305 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-config-data\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.109681 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnpcs\" (UniqueName: \"kubernetes.io/projected/7af45895-9497-4517-a8d8-56a64510ac72-kube-api-access-vnpcs\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.109818 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnpcs\" (UniqueName: \"kubernetes.io/projected/7af45895-9497-4517-a8d8-56a64510ac72-kube-api-access-vnpcs\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.110231 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnpcs\" (UniqueName: \"kubernetes.io/projected/7af45895-9497-4517-a8d8-56a64510ac72-kube-api-access-vnpcs\") pod \"ceilometer-0\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") " pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.164760 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.205720 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vt6df"] Dec 09 14:46:21 crc kubenswrapper[4770]: W1209 14:46:21.279575 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45da50a1_7520_4d42_8489_051d4df675fa.slice/crio-801d42879fe7765b094a2f0f9c341c308580084864825665ba431d5d65681966 WatchSource:0}: Error finding container 801d42879fe7765b094a2f0f9c341c308580084864825665ba431d5d65681966: Status 404 returned error can't find the container with id 801d42879fe7765b094a2f0f9c341c308580084864825665ba431d5d65681966 Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.568452 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dpmnb"] Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.793012 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dpmnb" event={"ID":"b7669f5b-7406-4ef5-833b-f69821551b08","Type":"ContainerStarted","Data":"bb208201158a7402fc3bee9c1aa241b7aa41de924ed7a02b1800aa9eb449e229"} Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.795230 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vt6df" event={"ID":"45da50a1-7520-4d42-8489-051d4df675fa","Type":"ContainerStarted","Data":"801d42879fe7765b094a2f0f9c341c308580084864825665ba431d5d65681966"} Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.798453 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-jv2s6" event={"ID":"bb61e89a-8782-4d2a-8677-6297a9e07a6c","Type":"ContainerStarted","Data":"8731234d0631405cfe6a73ba2c8de50248a94a980933374a232e0cda0b417f76"} Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.827534 4770 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/neutron-db-sync-bwk2j"] Dec 09 14:46:21 crc kubenswrapper[4770]: W1209 14:46:21.933207 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4d4710e_a3e8_493e_9f2c_38839187d587.slice/crio-4808f9d6bdb9565864a7344d4d1e28b9887b5cd3577a310c9e4bdf8cb8db21e7 WatchSource:0}: Error finding container 4808f9d6bdb9565864a7344d4d1e28b9887b5cd3577a310c9e4bdf8cb8db21e7: Status 404 returned error can't find the container with id 4808f9d6bdb9565864a7344d4d1e28b9887b5cd3577a310c9e4bdf8cb8db21e7 Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.936647 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-k7gz4"] Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.954897 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-9vv8k"] Dec 09 14:46:21 crc kubenswrapper[4770]: I1209 14:46:21.967966 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7m5xq"] Dec 09 14:46:22 crc kubenswrapper[4770]: I1209 14:46:22.109993 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:46:22 crc kubenswrapper[4770]: W1209 14:46:22.111339 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17af40df_932c_4a21_8463_5d60c1335003.slice/crio-221c08b3b40e4b74c40757dd323f7704bbc279f5a807d05a1bc23e57828e3551 WatchSource:0}: Error finding container 221c08b3b40e4b74c40757dd323f7704bbc279f5a807d05a1bc23e57828e3551: Status 404 returned error can't find the container with id 221c08b3b40e4b74c40757dd323f7704bbc279f5a807d05a1bc23e57828e3551 Dec 09 14:46:22 crc kubenswrapper[4770]: W1209 14:46:22.112183 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7af45895_9497_4517_a8d8_56a64510ac72.slice/crio-aae06d4b854f01b6bffab73049e7ce497961f07ca87890aae0ec113ce5757b48 WatchSource:0}: Error finding container aae06d4b854f01b6bffab73049e7ce497961f07ca87890aae0ec113ce5757b48: Status 404 returned error can't find the container with id aae06d4b854f01b6bffab73049e7ce497961f07ca87890aae0ec113ce5757b48 Dec 09 14:46:22 crc kubenswrapper[4770]: I1209 14:46:22.138595 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9mw5m"] Dec 09 14:46:22 crc kubenswrapper[4770]: I1209 14:46:22.828245 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7m5xq" event={"ID":"f4d4710e-a3e8-493e-9f2c-38839187d587","Type":"ContainerStarted","Data":"4808f9d6bdb9565864a7344d4d1e28b9887b5cd3577a310c9e4bdf8cb8db21e7"} Dec 09 14:46:22 crc kubenswrapper[4770]: I1209 14:46:22.842121 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7af45895-9497-4517-a8d8-56a64510ac72","Type":"ContainerStarted","Data":"aae06d4b854f01b6bffab73049e7ce497961f07ca87890aae0ec113ce5757b48"} Dec 09 14:46:22 crc kubenswrapper[4770]: I1209 14:46:22.843910 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-9vv8k" event={"ID":"fbfdafc8-508d-4eec-9496-7058a6d1d49b","Type":"ContainerStarted","Data":"5b17c24b9554d45e8168b537e6b5949c8ef487fe02af25246ebca0ee6f6e0e44"} Dec 09 14:46:22 crc kubenswrapper[4770]: I1209 14:46:22.851012 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" 
event={"ID":"17af40df-932c-4a21-8463-5d60c1335003","Type":"ContainerStarted","Data":"221c08b3b40e4b74c40757dd323f7704bbc279f5a807d05a1bc23e57828e3551"} Dec 09 14:46:22 crc kubenswrapper[4770]: I1209 14:46:22.862296 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-k7gz4" event={"ID":"017d5a4f-99ba-4d3f-9053-207e6f414ab1","Type":"ContainerStarted","Data":"cf773d8cb26acd09a313f82ae248c4162e0bee5741784f49c79f4c7e00bd9cb6"} Dec 09 14:46:22 crc kubenswrapper[4770]: I1209 14:46:22.868416 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bwk2j" event={"ID":"efd509c2-c0ac-450e-84d3-14e9e8935f1c","Type":"ContainerStarted","Data":"095a8269502231109925ae3a8b9fe7ba78918b935e25e3f11a22782be92df643"} Dec 09 14:46:22 crc kubenswrapper[4770]: I1209 14:46:22.873916 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:46:23 crc kubenswrapper[4770]: I1209 14:46:23.051880 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 09 14:46:23 crc kubenswrapper[4770]: I1209 14:46:23.061024 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 09 14:46:23 crc kubenswrapper[4770]: I1209 14:46:23.880223 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vt6df" event={"ID":"45da50a1-7520-4d42-8489-051d4df675fa","Type":"ContainerStarted","Data":"54bac9614ea0d105bc38a11270fb1baa50c4b88f058010891af0698bff023f15"} Dec 09 14:46:23 crc kubenswrapper[4770]: I1209 14:46:23.882296 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-jv2s6" event={"ID":"bb61e89a-8782-4d2a-8677-6297a9e07a6c","Type":"ContainerStarted","Data":"60e2e25dc5a318fe2855a16e28d2824b691fe80ee67a134d1f776a37731058e3"} Dec 09 14:46:23 crc kubenswrapper[4770]: I1209 14:46:23.882341 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b868669f-jv2s6" podUID="bb61e89a-8782-4d2a-8677-6297a9e07a6c" containerName="init" containerID="cri-o://60e2e25dc5a318fe2855a16e28d2824b691fe80ee67a134d1f776a37731058e3" gracePeriod=10 Dec 09 14:46:23 crc kubenswrapper[4770]: I1209 14:46:23.884226 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" event={"ID":"17af40df-932c-4a21-8463-5d60c1335003","Type":"ContainerStarted","Data":"03a34229ca2a22292286d9e3f4bc30e6398d55522b5cac71af8ef2866a11df39"} Dec 09 14:46:23 crc kubenswrapper[4770]: I1209 14:46:23.886372 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bwk2j" event={"ID":"efd509c2-c0ac-450e-84d3-14e9e8935f1c","Type":"ContainerStarted","Data":"acb47c1355df5a7b18e5cccaf6e6aa2b38b84a39db33f01c8e8af0173cea0f8f"} Dec 09 14:46:23 crc kubenswrapper[4770]: I1209 14:46:23.892825 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 09 14:46:23 crc kubenswrapper[4770]: I1209 14:46:23.994699 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bwk2j" podStartSLOduration=3.994679114 podStartE2EDuration="3.994679114s" podCreationTimestamp="2025-12-09 14:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:46:23.992697511 +0000 UTC m=+1415.888899647" 
watchObservedRunningTime="2025-12-09 14:46:23.994679114 +0000 UTC m=+1415.890881250" Dec 09 14:46:24 crc kubenswrapper[4770]: I1209 14:46:24.914337 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vt6df" podStartSLOduration=5.914321193 podStartE2EDuration="5.914321193s" podCreationTimestamp="2025-12-09 14:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:46:24.908989021 +0000 UTC m=+1416.805191157" watchObservedRunningTime="2025-12-09 14:46:24.914321193 +0000 UTC m=+1416.810523329" Dec 09 14:46:25 crc kubenswrapper[4770]: I1209 14:46:25.908500 4770 generic.go:334] "Generic (PLEG): container finished" podID="17af40df-932c-4a21-8463-5d60c1335003" containerID="03a34229ca2a22292286d9e3f4bc30e6398d55522b5cac71af8ef2866a11df39" exitCode=0 Dec 09 14:46:25 crc kubenswrapper[4770]: I1209 14:46:25.908594 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" event={"ID":"17af40df-932c-4a21-8463-5d60c1335003","Type":"ContainerDied","Data":"03a34229ca2a22292286d9e3f4bc30e6398d55522b5cac71af8ef2866a11df39"} Dec 09 14:46:25 crc kubenswrapper[4770]: I1209 14:46:25.915210 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bk6s" event={"ID":"f8c060ba-3638-443d-9c49-f097eaf5eb62","Type":"ContainerStarted","Data":"0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b"} Dec 09 14:46:25 crc kubenswrapper[4770]: I1209 14:46:25.937889 4770 generic.go:334] "Generic (PLEG): container finished" podID="bb61e89a-8782-4d2a-8677-6297a9e07a6c" containerID="60e2e25dc5a318fe2855a16e28d2824b691fe80ee67a134d1f776a37731058e3" exitCode=0 Dec 09 14:46:25 crc kubenswrapper[4770]: I1209 14:46:25.937957 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-jv2s6" event={"ID":"bb61e89a-8782-4d2a-8677-6297a9e07a6c","Type":"ContainerDied","Data":"60e2e25dc5a318fe2855a16e28d2824b691fe80ee67a134d1f776a37731058e3"} Dec 09 14:46:25 crc kubenswrapper[4770]: I1209 14:46:25.967702 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4bk6s" podStartSLOduration=4.589503968 podStartE2EDuration="11.967682153s" podCreationTimestamp="2025-12-09 14:46:14 +0000 UTC" firstStartedPulling="2025-12-09 14:46:16.609581544 +0000 UTC m=+1408.505783690" lastFinishedPulling="2025-12-09 14:46:23.987759739 +0000 UTC m=+1415.883961875" observedRunningTime="2025-12-09 14:46:25.961501958 +0000 UTC m=+1417.857704094" watchObservedRunningTime="2025-12-09 14:46:25.967682153 +0000 UTC m=+1417.863884289" Dec 09 14:46:26 crc kubenswrapper[4770]: I1209 14:46:26.956050 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-jv2s6" event={"ID":"bb61e89a-8782-4d2a-8677-6297a9e07a6c","Type":"ContainerDied","Data":"8731234d0631405cfe6a73ba2c8de50248a94a980933374a232e0cda0b417f76"} Dec 09 14:46:26 crc kubenswrapper[4770]: I1209 14:46:26.956600 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8731234d0631405cfe6a73ba2c8de50248a94a980933374a232e0cda0b417f76" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.006320 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.177961 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2wfw\" (UniqueName: \"kubernetes.io/projected/bb61e89a-8782-4d2a-8677-6297a9e07a6c-kube-api-access-n2wfw\") pod \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.178379 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-config\") pod \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.178526 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-swift-storage-0\") pod \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.178722 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-svc\") pod \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.179006 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-sb\") pod \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.179201 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-nb\") pod \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\" (UID: \"bb61e89a-8782-4d2a-8677-6297a9e07a6c\") " Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.192518 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb61e89a-8782-4d2a-8677-6297a9e07a6c-kube-api-access-n2wfw" (OuterVolumeSpecName: "kube-api-access-n2wfw") pod "bb61e89a-8782-4d2a-8677-6297a9e07a6c" (UID: "bb61e89a-8782-4d2a-8677-6297a9e07a6c"). InnerVolumeSpecName "kube-api-access-n2wfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.216560 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-config" (OuterVolumeSpecName: "config") pod "bb61e89a-8782-4d2a-8677-6297a9e07a6c" (UID: "bb61e89a-8782-4d2a-8677-6297a9e07a6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.217022 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bb61e89a-8782-4d2a-8677-6297a9e07a6c" (UID: "bb61e89a-8782-4d2a-8677-6297a9e07a6c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.224220 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bb61e89a-8782-4d2a-8677-6297a9e07a6c" (UID: "bb61e89a-8782-4d2a-8677-6297a9e07a6c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.231196 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb61e89a-8782-4d2a-8677-6297a9e07a6c" (UID: "bb61e89a-8782-4d2a-8677-6297a9e07a6c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.243076 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bb61e89a-8782-4d2a-8677-6297a9e07a6c" (UID: "bb61e89a-8782-4d2a-8677-6297a9e07a6c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.282619 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2wfw\" (UniqueName: \"kubernetes.io/projected/bb61e89a-8782-4d2a-8677-6297a9e07a6c-kube-api-access-n2wfw\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.282661 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.282676 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.282690 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.282702 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:27 crc kubenswrapper[4770]: I1209 14:46:27.282712 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb61e89a-8782-4d2a-8677-6297a9e07a6c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:46:28 crc kubenswrapper[4770]: I1209 14:46:28.006499 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-jv2s6" Dec 09 14:46:28 crc kubenswrapper[4770]: I1209 14:46:28.006662 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" event={"ID":"17af40df-932c-4a21-8463-5d60c1335003","Type":"ContainerStarted","Data":"eb82728c68a6ad911d750f0b3dbba376cc511c9d7e41717998f951aadf851a33"} Dec 09 14:46:28 crc kubenswrapper[4770]: I1209 14:46:28.007815 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:28 crc kubenswrapper[4770]: I1209 14:46:28.027423 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" podStartSLOduration=8.027406388 podStartE2EDuration="8.027406388s" podCreationTimestamp="2025-12-09 14:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:46:28.021684366 +0000 UTC m=+1419.917886512" watchObservedRunningTime="2025-12-09 14:46:28.027406388 +0000 UTC m=+1419.923608524" Dec 09 14:46:28 crc kubenswrapper[4770]: I1209 14:46:28.066388 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jv2s6"] Dec 09 14:46:28 crc kubenswrapper[4770]: I1209 14:46:28.084393 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jv2s6"] Dec 09 14:46:28 crc kubenswrapper[4770]: I1209 14:46:28.604223 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb61e89a-8782-4d2a-8677-6297a9e07a6c" path="/var/lib/kubelet/pods/bb61e89a-8782-4d2a-8677-6297a9e07a6c/volumes" Dec 09 14:46:34 crc kubenswrapper[4770]: I1209 14:46:34.943893 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:34 crc kubenswrapper[4770]: I1209 14:46:34.944954 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:35 crc kubenswrapper[4770]: I1209 14:46:35.008831 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:35 crc kubenswrapper[4770]: I1209 14:46:35.151199 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:46:35 crc kubenswrapper[4770]: I1209 14:46:35.244144 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bk6s"] Dec 09 14:46:36 crc kubenswrapper[4770]: I1209 14:46:36.035915 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:46:36 crc kubenswrapper[4770]: I1209 14:46:36.082936 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-djf6f"] Dec 09 14:46:36 crc kubenswrapper[4770]: I1209 14:46:36.083173 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" containerID="cri-o://07d3e5b68dd279277808cdf199c174824e85036bb88967c64a98b3b292722743" gracePeriod=10 Dec 09 14:46:36 crc kubenswrapper[4770]: I1209 14:46:36.914190 4770 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.242552824s: 
[/var/lib/containers/storage/overlay/71ff5054b086970d5d2717e9a07205f057f41c20be475a37c127279eeb10afd0/diff /var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/account-replicator/0.log]; will not log again for this container unless duration exceeds 2s Dec 09 14:46:37 crc kubenswrapper[4770]: I1209 14:46:37.124132 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4bk6s" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="registry-server" containerID="cri-o://0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" gracePeriod=2 Dec 09 14:46:38 crc kubenswrapper[4770]: I1209 14:46:38.135773 4770 generic.go:334] "Generic (PLEG): container finished" podID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" exitCode=0 Dec 09 14:46:38 crc kubenswrapper[4770]: I1209 14:46:38.135867 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bk6s" event={"ID":"f8c060ba-3638-443d-9c49-f097eaf5eb62","Type":"ContainerDied","Data":"0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b"} Dec 09 14:46:38 crc kubenswrapper[4770]: I1209 14:46:38.138716 4770 generic.go:334] "Generic (PLEG): container finished" podID="d22da327-1d9a-49b8-b67a-317118888295" containerID="07d3e5b68dd279277808cdf199c174824e85036bb88967c64a98b3b292722743" exitCode=0 Dec 09 14:46:38 crc kubenswrapper[4770]: I1209 14:46:38.138749 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" event={"ID":"d22da327-1d9a-49b8-b67a-317118888295","Type":"ContainerDied","Data":"07d3e5b68dd279277808cdf199c174824e85036bb88967c64a98b3b292722743"} Dec 09 14:46:39 crc kubenswrapper[4770]: I1209 14:46:39.833196 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Dec 09 14:46:44 crc kubenswrapper[4770]: I1209 14:46:44.833006 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Dec 09 14:46:44 crc kubenswrapper[4770]: E1209 14:46:44.944203 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:46:44 crc kubenswrapper[4770]: E1209 14:46:44.944773 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:46:44 crc kubenswrapper[4770]: E1209 14:46:44.945093 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:46:44 crc kubenswrapper[4770]: E1209 14:46:44.945198 4770 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-4bk6s" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="registry-server" Dec 09 14:46:49 crc kubenswrapper[4770]: I1209 14:46:49.833053 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Dec 09 14:46:49 crc kubenswrapper[4770]: I1209 14:46:49.835667 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" Dec 09 14:46:54 crc kubenswrapper[4770]: I1209 14:46:54.326344 4770 generic.go:334] "Generic (PLEG): container finished" podID="45da50a1-7520-4d42-8489-051d4df675fa" containerID="54bac9614ea0d105bc38a11270fb1baa50c4b88f058010891af0698bff023f15" exitCode=0 Dec 09 14:46:54 crc kubenswrapper[4770]: I1209 14:46:54.326462 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vt6df" event={"ID":"45da50a1-7520-4d42-8489-051d4df675fa","Type":"ContainerDied","Data":"54bac9614ea0d105bc38a11270fb1baa50c4b88f058010891af0698bff023f15"} Dec 09 14:46:54 crc kubenswrapper[4770]: I1209 14:46:54.832959 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Dec 09 14:46:54 crc kubenswrapper[4770]: E1209 14:46:54.944497 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:46:54 crc kubenswrapper[4770]: E1209 14:46:54.944874 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:46:54 crc kubenswrapper[4770]: E1209 14:46:54.946634 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:46:54 crc kubenswrapper[4770]: E1209 14:46:54.946672 4770 prober.go:104] "Probe errored" err="rpc error: code 
= NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-4bk6s" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="registry-server" Dec 09 14:46:56 crc kubenswrapper[4770]: E1209 14:46:56.277181 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 09 14:46:56 crc kubenswrapper[4770]: E1209 14:46:56.277656 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65h67h68fh66fh55ch57hch698hf8h5d5h574h677hf6hbbh544h588h5d5h555hc8h94h5cfh55dh675h644h5cdh574h9fh8bh5dch6bh5c6h575q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnpcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7af45895-9497-4517-a8d8-56a64510ac72): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:46:59 crc kubenswrapper[4770]: I1209 14:46:59.833432 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.150:5353: connect: connection refused" Dec 09 14:47:04 crc kubenswrapper[4770]: I1209 14:47:04.833426 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Dec 09 14:47:04 crc kubenswrapper[4770]: E1209 14:47:04.944986 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:47:04 crc kubenswrapper[4770]: E1209 14:47:04.945703 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:47:04 crc kubenswrapper[4770]: E1209 14:47:04.946050 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:47:04 crc kubenswrapper[4770]: E1209 14:47:04.946098 4770 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-4bk6s" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="registry-server" Dec 09 14:47:09 crc kubenswrapper[4770]: E1209 14:47:09.578187 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 09 14:47:09 crc kubenswrapper[4770]: E1209 14:47:09.578951 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-k7gz4_openstack(017d5a4f-99ba-4d3f-9053-207e6f414ab1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:47:09 crc kubenswrapper[4770]: E1209 14:47:09.580199 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-k7gz4" podUID="017d5a4f-99ba-4d3f-9053-207e6f414ab1" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.668074 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.846035 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-scripts\") pod \"45da50a1-7520-4d42-8489-051d4df675fa\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.846134 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-combined-ca-bundle\") pod \"45da50a1-7520-4d42-8489-051d4df675fa\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.846177 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds8fk\" (UniqueName: \"kubernetes.io/projected/45da50a1-7520-4d42-8489-051d4df675fa-kube-api-access-ds8fk\") pod \"45da50a1-7520-4d42-8489-051d4df675fa\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.846304 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-credential-keys\") pod \"45da50a1-7520-4d42-8489-051d4df675fa\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.846387 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-config-data\") pod \"45da50a1-7520-4d42-8489-051d4df675fa\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.847181 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-fernet-keys\") pod \"45da50a1-7520-4d42-8489-051d4df675fa\" (UID: \"45da50a1-7520-4d42-8489-051d4df675fa\") " Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.853254 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "45da50a1-7520-4d42-8489-051d4df675fa" (UID: "45da50a1-7520-4d42-8489-051d4df675fa"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.854155 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-scripts" (OuterVolumeSpecName: "scripts") pod "45da50a1-7520-4d42-8489-051d4df675fa" (UID: "45da50a1-7520-4d42-8489-051d4df675fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.855855 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "45da50a1-7520-4d42-8489-051d4df675fa" (UID: "45da50a1-7520-4d42-8489-051d4df675fa"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.858993 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45da50a1-7520-4d42-8489-051d4df675fa-kube-api-access-ds8fk" (OuterVolumeSpecName: "kube-api-access-ds8fk") pod "45da50a1-7520-4d42-8489-051d4df675fa" (UID: "45da50a1-7520-4d42-8489-051d4df675fa"). InnerVolumeSpecName "kube-api-access-ds8fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.881321 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-config-data" (OuterVolumeSpecName: "config-data") pod "45da50a1-7520-4d42-8489-051d4df675fa" (UID: "45da50a1-7520-4d42-8489-051d4df675fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.885242 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45da50a1-7520-4d42-8489-051d4df675fa" (UID: "45da50a1-7520-4d42-8489-051d4df675fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.949489 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.949521 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds8fk\" (UniqueName: \"kubernetes.io/projected/45da50a1-7520-4d42-8489-051d4df675fa-kube-api-access-ds8fk\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.949532 4770 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.949541 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.949549 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:09 crc kubenswrapper[4770]: I1209 14:47:09.949559 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45da50a1-7520-4d42-8489-051d4df675fa-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.551190 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vt6df" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.552372 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vt6df" event={"ID":"45da50a1-7520-4d42-8489-051d4df675fa","Type":"ContainerDied","Data":"801d42879fe7765b094a2f0f9c341c308580084864825665ba431d5d65681966"} Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.552628 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="801d42879fe7765b094a2f0f9c341c308580084864825665ba431d5d65681966" Dec 09 14:47:10 crc kubenswrapper[4770]: E1209 14:47:10.554721 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-k7gz4" podUID="017d5a4f-99ba-4d3f-9053-207e6f414ab1" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.797159 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vt6df"] Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.805563 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vt6df"] Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.918373 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vg222"] Dec 09 14:47:10 crc kubenswrapper[4770]: E1209 14:47:10.919082 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb61e89a-8782-4d2a-8677-6297a9e07a6c" containerName="init" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.919099 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb61e89a-8782-4d2a-8677-6297a9e07a6c" containerName="init" Dec 09 14:47:10 crc kubenswrapper[4770]: E1209 14:47:10.919128 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45da50a1-7520-4d42-8489-051d4df675fa" containerName="keystone-bootstrap" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.919135 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="45da50a1-7520-4d42-8489-051d4df675fa" containerName="keystone-bootstrap" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.919314 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb61e89a-8782-4d2a-8677-6297a9e07a6c" containerName="init" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.919341 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="45da50a1-7520-4d42-8489-051d4df675fa" containerName="keystone-bootstrap" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.920029 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.924658 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.925165 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fnhpf" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.925302 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.926110 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.931532 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vg222"] Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.980332 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-combined-ca-bundle\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.980586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-config-data\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.980666 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-fernet-keys\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.980811 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvd76\" (UniqueName: \"kubernetes.io/projected/1346b759-25b3-41df-9fa0-b7b1137fce00-kube-api-access-bvd76\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.981029 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-credential-keys\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:10 crc kubenswrapper[4770]: I1209 14:47:10.981111 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-scripts\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.081779 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvd76\" (UniqueName: \"kubernetes.io/projected/1346b759-25b3-41df-9fa0-b7b1137fce00-kube-api-access-bvd76\") pod 
\"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.081890 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-credential-keys\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.081914 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-scripts\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.081998 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-combined-ca-bundle\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.082036 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-config-data\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.082059 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-fernet-keys\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.087551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-scripts\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.087668 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-fernet-keys\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.088245 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-config-data\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.089091 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-combined-ca-bundle\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.091215 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-credential-keys\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.124045 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvd76\" (UniqueName: \"kubernetes.io/projected/1346b759-25b3-41df-9fa0-b7b1137fce00-kube-api-access-bvd76\") pod \"keystone-bootstrap-vg222\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: E1209 14:47:11.137084 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 09 14:47:11 crc kubenswrapper[4770]: E1209 14:47:11.137332 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tg8n8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dpmnb_openstack(b7669f5b-7406-4ef5-833b-f69821551b08): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:47:11 crc kubenswrapper[4770]: E1209 14:47:11.138549 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dpmnb" podUID="b7669f5b-7406-4ef5-833b-f69821551b08" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.231986 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.234652 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.243506 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.387958 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-catalog-content\") pod \"f8c060ba-3638-443d-9c49-f097eaf5eb62\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.388054 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-svc\") pod \"d22da327-1d9a-49b8-b67a-317118888295\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.388098 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-swift-storage-0\") pod \"d22da327-1d9a-49b8-b67a-317118888295\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.388159 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-sb\") pod \"d22da327-1d9a-49b8-b67a-317118888295\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.388212 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-nb\") pod \"d22da327-1d9a-49b8-b67a-317118888295\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.388264 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndmkn\" (UniqueName: \"kubernetes.io/projected/f8c060ba-3638-443d-9c49-f097eaf5eb62-kube-api-access-ndmkn\") pod \"f8c060ba-3638-443d-9c49-f097eaf5eb62\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.388307 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvhpn\" (UniqueName: \"kubernetes.io/projected/d22da327-1d9a-49b8-b67a-317118888295-kube-api-access-lvhpn\") pod \"d22da327-1d9a-49b8-b67a-317118888295\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 
14:47:11.388322 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-config\") pod \"d22da327-1d9a-49b8-b67a-317118888295\" (UID: \"d22da327-1d9a-49b8-b67a-317118888295\") " Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.388359 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-utilities\") pod \"f8c060ba-3638-443d-9c49-f097eaf5eb62\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.392606 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22da327-1d9a-49b8-b67a-317118888295-kube-api-access-lvhpn" (OuterVolumeSpecName: "kube-api-access-lvhpn") pod "d22da327-1d9a-49b8-b67a-317118888295" (UID: "d22da327-1d9a-49b8-b67a-317118888295"). InnerVolumeSpecName "kube-api-access-lvhpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.392803 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-utilities" (OuterVolumeSpecName: "utilities") pod "f8c060ba-3638-443d-9c49-f097eaf5eb62" (UID: "f8c060ba-3638-443d-9c49-f097eaf5eb62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.394035 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c060ba-3638-443d-9c49-f097eaf5eb62-kube-api-access-ndmkn" (OuterVolumeSpecName: "kube-api-access-ndmkn") pod "f8c060ba-3638-443d-9c49-f097eaf5eb62" (UID: "f8c060ba-3638-443d-9c49-f097eaf5eb62"). InnerVolumeSpecName "kube-api-access-ndmkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.437096 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d22da327-1d9a-49b8-b67a-317118888295" (UID: "d22da327-1d9a-49b8-b67a-317118888295"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.439521 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d22da327-1d9a-49b8-b67a-317118888295" (UID: "d22da327-1d9a-49b8-b67a-317118888295"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.440211 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-config" (OuterVolumeSpecName: "config") pod "d22da327-1d9a-49b8-b67a-317118888295" (UID: "d22da327-1d9a-49b8-b67a-317118888295"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.461768 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d22da327-1d9a-49b8-b67a-317118888295" (UID: "d22da327-1d9a-49b8-b67a-317118888295"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.461944 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d22da327-1d9a-49b8-b67a-317118888295" (UID: "d22da327-1d9a-49b8-b67a-317118888295"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.488937 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8c060ba-3638-443d-9c49-f097eaf5eb62" (UID: "f8c060ba-3638-443d-9c49-f097eaf5eb62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.489542 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-catalog-content\") pod \"f8c060ba-3638-443d-9c49-f097eaf5eb62\" (UID: \"f8c060ba-3638-443d-9c49-f097eaf5eb62\") " Dec 09 14:47:11 crc kubenswrapper[4770]: W1209 14:47:11.489668 4770 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f8c060ba-3638-443d-9c49-f097eaf5eb62/volumes/kubernetes.io~empty-dir/catalog-content Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.489682 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8c060ba-3638-443d-9c49-f097eaf5eb62" (UID: "f8c060ba-3638-443d-9c49-f097eaf5eb62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.490034 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.490049 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.490060 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.490068 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.490075 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.490083 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndmkn\" (UniqueName: \"kubernetes.io/projected/f8c060ba-3638-443d-9c49-f097eaf5eb62-kube-api-access-ndmkn\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.490092 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvhpn\" (UniqueName: \"kubernetes.io/projected/d22da327-1d9a-49b8-b67a-317118888295-kube-api-access-lvhpn\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.490100 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22da327-1d9a-49b8-b67a-317118888295-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.490108 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c060ba-3638-443d-9c49-f097eaf5eb62-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.562782 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bk6s" event={"ID":"f8c060ba-3638-443d-9c49-f097eaf5eb62","Type":"ContainerDied","Data":"6cadf372a6664ca733855fa433b16a88d8ffeff4ff4bf5cb8eb1b54a1ec6abb8"} Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.563142 4770 scope.go:117] "RemoveContainer" containerID="0d141f6cee6fee6d83eed06210c8e976bbe141943a604aeb21cb094c7610a62b" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.562880 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bk6s" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.567291 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.576952 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" event={"ID":"d22da327-1d9a-49b8-b67a-317118888295","Type":"ContainerDied","Data":"c7a278a2b622d4cc8d31f3640251f97bfbce87700873214677d418e006cb5c39"} Dec 09 14:47:11 crc kubenswrapper[4770]: E1209 14:47:11.580822 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-dpmnb" podUID="b7669f5b-7406-4ef5-833b-f69821551b08" Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.623444 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bk6s"] Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.636128 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4bk6s"] Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.648386 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-djf6f"] Dec 09 14:47:11 crc kubenswrapper[4770]: I1209 14:47:11.657164 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-djf6f"] Dec 09 14:47:12 crc kubenswrapper[4770]: I1209 14:47:12.601083 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45da50a1-7520-4d42-8489-051d4df675fa" path="/var/lib/kubelet/pods/45da50a1-7520-4d42-8489-051d4df675fa/volumes" Dec 09 14:47:12 crc kubenswrapper[4770]: I1209 14:47:12.601713 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22da327-1d9a-49b8-b67a-317118888295" path="/var/lib/kubelet/pods/d22da327-1d9a-49b8-b67a-317118888295/volumes" Dec 09 14:47:12 crc kubenswrapper[4770]: I1209 14:47:12.602353 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" path="/var/lib/kubelet/pods/f8c060ba-3638-443d-9c49-f097eaf5eb62/volumes" Dec 09 14:47:14 crc kubenswrapper[4770]: I1209 14:47:14.833988 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-djf6f" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: i/o timeout" Dec 09 14:47:15 crc kubenswrapper[4770]: E1209 14:47:15.708210 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Dec 09 14:47:15 crc kubenswrapper[4770]: E1209 14:47:15.708271 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Dec 09 14:47:15 crc kubenswrapper[4770]: E1209 14:47:15.708409 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg6jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-9vv8k_openstack(fbfdafc8-508d-4eec-9496-7058a6d1d49b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:47:15 crc kubenswrapper[4770]: E1209 14:47:15.709628 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-9vv8k" podUID="fbfdafc8-508d-4eec-9496-7058a6d1d49b" Dec 09 14:47:15 crc kubenswrapper[4770]: I1209 14:47:15.954095 4770 scope.go:117] "RemoveContainer" containerID="c1fd2ae75036f1ac811f290d15680720ed3b08cd141fb2287e5e59aee107d1f9" Dec 09 14:47:16 crc kubenswrapper[4770]: I1209 14:47:16.413083 4770 scope.go:117] "RemoveContainer" containerID="bd855d08e0acfc412c9fbdc0cf66fa2ee88d36917f6fd3fc594edca08ad0338b" Dec 09 14:47:16 crc kubenswrapper[4770]: I1209 14:47:16.603252 4770 scope.go:117] "RemoveContainer" containerID="07d3e5b68dd279277808cdf199c174824e85036bb88967c64a98b3b292722743" Dec 09 14:47:16 crc kubenswrapper[4770]: I1209 14:47:16.622085 4770 scope.go:117] "RemoveContainer" containerID="4af8e55aac0150aed98cb7ebe057976bb13db3f3f9a9d089763a5cc104f205dd" Dec 09 14:47:16 crc kubenswrapper[4770]: E1209 14:47:16.627981 4770 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-9vv8k" podUID="fbfdafc8-508d-4eec-9496-7058a6d1d49b" Dec 09 14:47:16 crc kubenswrapper[4770]: I1209 14:47:16.881953 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vg222"] Dec 09 14:47:17 crc kubenswrapper[4770]: I1209 14:47:17.645073 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7m5xq" event={"ID":"f4d4710e-a3e8-493e-9f2c-38839187d587","Type":"ContainerStarted","Data":"9ef5bd99517f705fc3c0911971efa70546fd0f1113c3d6496c609e89f7f6d6bd"} Dec 09 14:47:17 crc kubenswrapper[4770]: I1209 14:47:17.648280 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7af45895-9497-4517-a8d8-56a64510ac72","Type":"ContainerStarted","Data":"9f869192beb541bee58388bbc97590eb9af7eaa1cfa9e7849ed66e6871204ac6"} Dec 09 14:47:17 crc kubenswrapper[4770]: I1209 14:47:17.650004 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vg222" event={"ID":"1346b759-25b3-41df-9fa0-b7b1137fce00","Type":"ContainerStarted","Data":"2825d1689691058f8fb46a00939a2daa4df66362ee5fa626f8922f60b83cf711"} Dec 09 14:47:17 crc kubenswrapper[4770]: I1209 14:47:17.650030 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vg222" event={"ID":"1346b759-25b3-41df-9fa0-b7b1137fce00","Type":"ContainerStarted","Data":"802f63bb3ea2d506cccd7e017d54dc6b5b4d8ac83090df691cb046dc4a9c2e95"} Dec 09 14:47:17 crc kubenswrapper[4770]: I1209 14:47:17.683264 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7m5xq" podStartSLOduration=8.533663719 podStartE2EDuration="57.683245296s" podCreationTimestamp="2025-12-09 14:46:20 +0000 UTC" firstStartedPulling="2025-12-09 14:46:21.934883417 +0000 UTC m=+1413.831085553" lastFinishedPulling="2025-12-09 14:47:11.084464994 +0000 UTC m=+1462.980667130" observedRunningTime="2025-12-09 14:47:17.66092445 +0000 UTC m=+1469.557126606" watchObservedRunningTime="2025-12-09 14:47:17.683245296 +0000 UTC m=+1469.579447452" Dec 09 14:47:17 crc kubenswrapper[4770]: I1209 14:47:17.685789 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vg222" podStartSLOduration=7.685780414 podStartE2EDuration="7.685780414s" podCreationTimestamp="2025-12-09 14:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:17.676453214 +0000 UTC m=+1469.572655350" watchObservedRunningTime="2025-12-09 14:47:17.685780414 +0000 UTC m=+1469.581982560" Dec 09 14:47:20 crc kubenswrapper[4770]: I1209 14:47:20.687509 4770 generic.go:334] "Generic (PLEG): container finished" podID="b0c0c709-cb21-49e0-ba23-211f0cd1749d" containerID="53a17b5b76555580cebf00cc3ccdfccc549095929ee984660b7daf6437f7f746" exitCode=0 Dec 09 14:47:20 crc kubenswrapper[4770]: I1209 14:47:20.687634 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gbddk" event={"ID":"b0c0c709-cb21-49e0-ba23-211f0cd1749d","Type":"ContainerDied","Data":"53a17b5b76555580cebf00cc3ccdfccc549095929ee984660b7daf6437f7f746"} Dec 09 14:47:20 crc kubenswrapper[4770]: I1209 14:47:20.694502 4770 
generic.go:334] "Generic (PLEG): container finished" podID="1346b759-25b3-41df-9fa0-b7b1137fce00" containerID="2825d1689691058f8fb46a00939a2daa4df66362ee5fa626f8922f60b83cf711" exitCode=0 Dec 09 14:47:20 crc kubenswrapper[4770]: I1209 14:47:20.694606 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vg222" event={"ID":"1346b759-25b3-41df-9fa0-b7b1137fce00","Type":"ContainerDied","Data":"2825d1689691058f8fb46a00939a2daa4df66362ee5fa626f8922f60b83cf711"} Dec 09 14:47:20 crc kubenswrapper[4770]: I1209 14:47:20.701165 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4d4710e-a3e8-493e-9f2c-38839187d587" containerID="9ef5bd99517f705fc3c0911971efa70546fd0f1113c3d6496c609e89f7f6d6bd" exitCode=0 Dec 09 14:47:20 crc kubenswrapper[4770]: I1209 14:47:20.701240 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7m5xq" event={"ID":"f4d4710e-a3e8-493e-9f2c-38839187d587","Type":"ContainerDied","Data":"9ef5bd99517f705fc3c0911971efa70546fd0f1113c3d6496c609e89f7f6d6bd"} Dec 09 14:47:21 crc kubenswrapper[4770]: I1209 14:47:21.715715 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7af45895-9497-4517-a8d8-56a64510ac72","Type":"ContainerStarted","Data":"15d5989033f41b50ebe88317af224c7c7eda49f0d83907739d3f5f6a18e7e040"} Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.258906 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.348359 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7m5xq" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.350122 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gbddk" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.396174 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-config-data\") pod \"1346b759-25b3-41df-9fa0-b7b1137fce00\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.396216 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-credential-keys\") pod \"1346b759-25b3-41df-9fa0-b7b1137fce00\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.396286 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-scripts\") pod \"1346b759-25b3-41df-9fa0-b7b1137fce00\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.396318 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-combined-ca-bundle\") pod \"1346b759-25b3-41df-9fa0-b7b1137fce00\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.396432 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-fernet-keys\") pod \"1346b759-25b3-41df-9fa0-b7b1137fce00\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.396467 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvd76\" (UniqueName: \"kubernetes.io/projected/1346b759-25b3-41df-9fa0-b7b1137fce00-kube-api-access-bvd76\") pod \"1346b759-25b3-41df-9fa0-b7b1137fce00\" (UID: \"1346b759-25b3-41df-9fa0-b7b1137fce00\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.403225 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1346b759-25b3-41df-9fa0-b7b1137fce00" (UID: "1346b759-25b3-41df-9fa0-b7b1137fce00"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.404902 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1346b759-25b3-41df-9fa0-b7b1137fce00" (UID: "1346b759-25b3-41df-9fa0-b7b1137fce00"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.417020 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1346b759-25b3-41df-9fa0-b7b1137fce00-kube-api-access-bvd76" (OuterVolumeSpecName: "kube-api-access-bvd76") pod "1346b759-25b3-41df-9fa0-b7b1137fce00" (UID: "1346b759-25b3-41df-9fa0-b7b1137fce00"). InnerVolumeSpecName "kube-api-access-bvd76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.418989 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-scripts" (OuterVolumeSpecName: "scripts") pod "1346b759-25b3-41df-9fa0-b7b1137fce00" (UID: "1346b759-25b3-41df-9fa0-b7b1137fce00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.436270 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-config-data" (OuterVolumeSpecName: "config-data") pod "1346b759-25b3-41df-9fa0-b7b1137fce00" (UID: "1346b759-25b3-41df-9fa0-b7b1137fce00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.452970 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1346b759-25b3-41df-9fa0-b7b1137fce00" (UID: "1346b759-25b3-41df-9fa0-b7b1137fce00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.498653 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d4710e-a3e8-493e-9f2c-38839187d587-logs\") pod \"f4d4710e-a3e8-493e-9f2c-38839187d587\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.498694 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-scripts\") pod \"f4d4710e-a3e8-493e-9f2c-38839187d587\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.498816 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-combined-ca-bundle\") pod \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.498911 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl8gd\" (UniqueName: \"kubernetes.io/projected/b0c0c709-cb21-49e0-ba23-211f0cd1749d-kube-api-access-zl8gd\") pod \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.498929 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-config-data\") pod \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.498954 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h5gd\" (UniqueName: \"kubernetes.io/projected/f4d4710e-a3e8-493e-9f2c-38839187d587-kube-api-access-6h5gd\") pod \"f4d4710e-a3e8-493e-9f2c-38839187d587\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.498972 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-combined-ca-bundle\") pod \"f4d4710e-a3e8-493e-9f2c-38839187d587\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.499048 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d4710e-a3e8-493e-9f2c-38839187d587-logs" (OuterVolumeSpecName: "logs") pod "f4d4710e-a3e8-493e-9f2c-38839187d587" (UID: "f4d4710e-a3e8-493e-9f2c-38839187d587"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.499081 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-config-data\") pod \"f4d4710e-a3e8-493e-9f2c-38839187d587\" (UID: \"f4d4710e-a3e8-493e-9f2c-38839187d587\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.499104 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-db-sync-config-data\") pod \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\" (UID: \"b0c0c709-cb21-49e0-ba23-211f0cd1749d\") " Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.500517 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvd76\" (UniqueName: \"kubernetes.io/projected/1346b759-25b3-41df-9fa0-b7b1137fce00-kube-api-access-bvd76\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.500544 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.500557 4770 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.500567 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.500577 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.500586 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d4710e-a3e8-493e-9f2c-38839187d587-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.500597 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1346b759-25b3-41df-9fa0-b7b1137fce00-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.502427 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-scripts" (OuterVolumeSpecName: "scripts") pod "f4d4710e-a3e8-493e-9f2c-38839187d587" (UID: "f4d4710e-a3e8-493e-9f2c-38839187d587"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.503204 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d4710e-a3e8-493e-9f2c-38839187d587-kube-api-access-6h5gd" (OuterVolumeSpecName: "kube-api-access-6h5gd") pod "f4d4710e-a3e8-493e-9f2c-38839187d587" (UID: "f4d4710e-a3e8-493e-9f2c-38839187d587"). InnerVolumeSpecName "kube-api-access-6h5gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.504068 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c0c709-cb21-49e0-ba23-211f0cd1749d-kube-api-access-zl8gd" (OuterVolumeSpecName: "kube-api-access-zl8gd") pod "b0c0c709-cb21-49e0-ba23-211f0cd1749d" (UID: "b0c0c709-cb21-49e0-ba23-211f0cd1749d"). InnerVolumeSpecName "kube-api-access-zl8gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.504363 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b0c0c709-cb21-49e0-ba23-211f0cd1749d" (UID: "b0c0c709-cb21-49e0-ba23-211f0cd1749d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.525603 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0c0c709-cb21-49e0-ba23-211f0cd1749d" (UID: "b0c0c709-cb21-49e0-ba23-211f0cd1749d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.529894 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4d4710e-a3e8-493e-9f2c-38839187d587" (UID: "f4d4710e-a3e8-493e-9f2c-38839187d587"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.532125 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-config-data" (OuterVolumeSpecName: "config-data") pod "f4d4710e-a3e8-493e-9f2c-38839187d587" (UID: "f4d4710e-a3e8-493e-9f2c-38839187d587"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.548228 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-config-data" (OuterVolumeSpecName: "config-data") pod "b0c0c709-cb21-49e0-ba23-211f0cd1749d" (UID: "b0c0c709-cb21-49e0-ba23-211f0cd1749d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.601911 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h5gd\" (UniqueName: \"kubernetes.io/projected/f4d4710e-a3e8-493e-9f2c-38839187d587-kube-api-access-6h5gd\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.601939 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.601948 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.601958 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.601966 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4710e-a3e8-493e-9f2c-38839187d587-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.601974 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.601982 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl8gd\" (UniqueName: \"kubernetes.io/projected/b0c0c709-cb21-49e0-ba23-211f0cd1749d-kube-api-access-zl8gd\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.601992 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c0c709-cb21-49e0-ba23-211f0cd1749d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.726651 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7m5xq" event={"ID":"f4d4710e-a3e8-493e-9f2c-38839187d587","Type":"ContainerDied","Data":"4808f9d6bdb9565864a7344d4d1e28b9887b5cd3577a310c9e4bdf8cb8db21e7"} Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.727028 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4808f9d6bdb9565864a7344d4d1e28b9887b5cd3577a310c9e4bdf8cb8db21e7" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.726846 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7m5xq" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.730021 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gbddk" event={"ID":"b0c0c709-cb21-49e0-ba23-211f0cd1749d","Type":"ContainerDied","Data":"615d5bac356118778c4074709303a00380944b5f723626ca4e3e626cc3e31d5c"} Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.730075 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="615d5bac356118778c4074709303a00380944b5f723626ca4e3e626cc3e31d5c" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.730043 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gbddk" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.733592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vg222" event={"ID":"1346b759-25b3-41df-9fa0-b7b1137fce00","Type":"ContainerDied","Data":"802f63bb3ea2d506cccd7e017d54dc6b5b4d8ac83090df691cb046dc4a9c2e95"} Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.733634 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="802f63bb3ea2d506cccd7e017d54dc6b5b4d8ac83090df691cb046dc4a9c2e95" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.733711 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vg222" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.914906 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b79f96648-bcj77"] Dec 09 14:47:22 crc kubenswrapper[4770]: E1209 14:47:22.915404 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="registry-server" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915430 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="registry-server" Dec 09 14:47:22 crc kubenswrapper[4770]: E1209 14:47:22.915455 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="extract-content" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915464 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="extract-content" Dec 09 14:47:22 crc kubenswrapper[4770]: E1209 14:47:22.915477 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d4710e-a3e8-493e-9f2c-38839187d587" containerName="placement-db-sync" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915485 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d4710e-a3e8-493e-9f2c-38839187d587" containerName="placement-db-sync" Dec 09 14:47:22 crc kubenswrapper[4770]: E1209 14:47:22.915499 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915508 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" Dec 09 14:47:22 crc kubenswrapper[4770]: E1209 14:47:22.915540 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c0c709-cb21-49e0-ba23-211f0cd1749d" containerName="glance-db-sync" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915551 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0c0c709-cb21-49e0-ba23-211f0cd1749d" containerName="glance-db-sync" Dec 09 14:47:22 crc kubenswrapper[4770]: E1209 14:47:22.915561 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1346b759-25b3-41df-9fa0-b7b1137fce00" containerName="keystone-bootstrap" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915569 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1346b759-25b3-41df-9fa0-b7b1137fce00" containerName="keystone-bootstrap" Dec 09 14:47:22 crc kubenswrapper[4770]: E1209 14:47:22.915584 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="extract-utilities" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915593 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="extract-utilities" Dec 09 14:47:22 crc kubenswrapper[4770]: E1209 14:47:22.915607 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="init" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915615 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="init" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915844 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c060ba-3638-443d-9c49-f097eaf5eb62" containerName="registry-server" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915867 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d4710e-a3e8-493e-9f2c-38839187d587" containerName="placement-db-sync" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915883 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1346b759-25b3-41df-9fa0-b7b1137fce00" containerName="keystone-bootstrap" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915900 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c0c709-cb21-49e0-ba23-211f0cd1749d" containerName="glance-db-sync" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.915915 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22da327-1d9a-49b8-b67a-317118888295" containerName="dnsmasq-dns" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.916769 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.924519 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.924828 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.926074 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.926289 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fnhpf" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.927117 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.931028 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.932377 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b79f96648-bcj77"] Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.955542 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85564fd668-bwn85"] Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.957301 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.964842 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.965018 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.965155 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.965316 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.965526 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bzfdg" Dec 09 14:47:22 crc kubenswrapper[4770]: I1209 14:47:22.982506 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85564fd668-bwn85"] Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.012369 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-combined-ca-bundle\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.012423 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-public-tls-certs\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.012556 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-fernet-keys\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.012642 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5lm2\" (UniqueName: \"kubernetes.io/projected/34aa6241-b9e1-45b1-915b-2de7e264e2a0-kube-api-access-m5lm2\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.012820 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-credential-keys\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.013149 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-scripts\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.013227 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-internal-tls-certs\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.013334 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-config-data\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.115608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-combined-ca-bundle\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.115758 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-scripts\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.115812 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-public-tls-certs\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.115839 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-internal-tls-certs\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.115870 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-scripts\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.115926 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-config-data\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.115965 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-combined-ca-bundle\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.116051 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-internal-tls-certs\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.116120 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-public-tls-certs\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.116153 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-config-data\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.116189 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bac9177-f03c-4a16-b2c4-da456883ca22-logs\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.116252 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7knjd\" (UniqueName: \"kubernetes.io/projected/6bac9177-f03c-4a16-b2c4-da456883ca22-kube-api-access-7knjd\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.116286 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-fernet-keys\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.116330 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5lm2\" (UniqueName: \"kubernetes.io/projected/34aa6241-b9e1-45b1-915b-2de7e264e2a0-kube-api-access-m5lm2\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.116372 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-credential-keys\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.127151 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-scripts\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.127587 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-internal-tls-certs\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.128146 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-credential-keys\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.147302 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-public-tls-certs\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.152349 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-fernet-keys\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.152542 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5lm2\" (UniqueName: \"kubernetes.io/projected/34aa6241-b9e1-45b1-915b-2de7e264e2a0-kube-api-access-m5lm2\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.156326 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-combined-ca-bundle\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " 
pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.157817 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34aa6241-b9e1-45b1-915b-2de7e264e2a0-config-data\") pod \"keystone-b79f96648-bcj77\" (UID: \"34aa6241-b9e1-45b1-915b-2de7e264e2a0\") " pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.194847 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qjzbv"] Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.196884 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.217916 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-public-tls-certs\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.218229 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-scripts\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.218411 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-internal-tls-certs\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.218957 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-config-data\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.219160 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bac9177-f03c-4a16-b2c4-da456883ca22-logs\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.219294 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7knjd\" (UniqueName: \"kubernetes.io/projected/6bac9177-f03c-4a16-b2c4-da456883ca22-kube-api-access-7knjd\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.219490 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-combined-ca-bundle\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.221282 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bac9177-f03c-4a16-b2c4-da456883ca22-logs\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.224989 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-config-data\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.227395 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-public-tls-certs\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.229297 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qjzbv"] Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.241337 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.241449 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-internal-tls-certs\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.241482 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-combined-ca-bundle\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.246275 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7knjd\" (UniqueName: \"kubernetes.io/projected/6bac9177-f03c-4a16-b2c4-da456883ca22-kube-api-access-7knjd\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.251114 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bac9177-f03c-4a16-b2c4-da456883ca22-scripts\") pod \"placement-85564fd668-bwn85\" (UID: \"6bac9177-f03c-4a16-b2c4-da456883ca22\") " pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.285516 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.323746 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.324006 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.324046 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.324562 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdt49\" (UniqueName: \"kubernetes.io/projected/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-kube-api-access-jdt49\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.324601 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-config\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.324649 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.426122 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.426507 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.426615 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-swift-storage-0\") pod 
\"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.426646 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.426760 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdt49\" (UniqueName: \"kubernetes.io/projected/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-kube-api-access-jdt49\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.426784 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-config\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.427929 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-config\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.427996 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.428049 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.428307 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.428456 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.449793 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdt49\" (UniqueName: \"kubernetes.io/projected/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-kube-api-access-jdt49\") pod \"dnsmasq-dns-56df8fb6b7-qjzbv\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.560133 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.866317 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b79f96648-bcj77"] Dec 09 14:47:23 crc kubenswrapper[4770]: I1209 14:47:23.884162 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85564fd668-bwn85"] Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.138269 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qjzbv"] Dec 09 14:47:24 crc kubenswrapper[4770]: W1209 14:47:24.177434 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8d79aac_9bb5_48a9_b9e0_2650f9285aaf.slice/crio-8f7d61c5b01715f7959e1b493be01159a7f56fc0937d76d14a0a914cbcfe8d92 WatchSource:0}: Error finding container 8f7d61c5b01715f7959e1b493be01159a7f56fc0937d76d14a0a914cbcfe8d92: Status 404 returned error can't find the container with id 8f7d61c5b01715f7959e1b493be01159a7f56fc0937d76d14a0a914cbcfe8d92 Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.231941 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.239271 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.242161 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.243192 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.244349 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-c7ww5" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.261257 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.346136 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.347024 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.347178 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 
14:47:24.347329 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.347474 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-logs\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.347640 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.347823 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mb4\" (UniqueName: \"kubernetes.io/projected/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-kube-api-access-57mb4\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.422770 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.451056 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mb4\" (UniqueName: \"kubernetes.io/projected/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-kube-api-access-57mb4\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.451112 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.451223 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.451267 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.451307 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.451375 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-logs\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.451409 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.453337 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.454663 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-logs\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.454801 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.458646 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.458697 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4d3471d01ab9f49c6d7e7ed17e2f2c5045d7b7d89c846663711f4829da0cc6bb/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.469894 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mb4\" (UniqueName: \"kubernetes.io/projected/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-kube-api-access-57mb4\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.481893 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.482893 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.490224 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.490408 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.493021 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.549062 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.561333 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.663514 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.664561 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.664628 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.691329 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtwfd\" (UniqueName: \"kubernetes.io/projected/6221194b-80af-406a-acdd-f2fac253ae6e-kube-api-access-dtwfd\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.691458 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.691940 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-logs\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.693055 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.770827 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" event={"ID":"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf","Type":"ContainerStarted","Data":"8f7d61c5b01715f7959e1b493be01159a7f56fc0937d76d14a0a914cbcfe8d92"} Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.772965 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85564fd668-bwn85" 
event={"ID":"6bac9177-f03c-4a16-b2c4-da456883ca22","Type":"ContainerStarted","Data":"2f051111d337da0708ba10ed370fe9a62cd86e5fc5b765fb934817c03a2ff4b0"} Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.773978 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b79f96648-bcj77" event={"ID":"34aa6241-b9e1-45b1-915b-2de7e264e2a0","Type":"ContainerStarted","Data":"ec04ea5547b2f611b40a1089e43ee8e60907e12db5e067c8c0a1303913573811"} Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.800170 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-logs\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.800639 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.800682 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-logs\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.800850 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.800893 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.800929 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.800974 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtwfd\" (UniqueName: \"kubernetes.io/projected/6221194b-80af-406a-acdd-f2fac253ae6e-kube-api-access-dtwfd\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.801089 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " 
pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.801850 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.806159 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.806171 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.822926 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtwfd\" (UniqueName: \"kubernetes.io/projected/6221194b-80af-406a-acdd-f2fac253ae6e-kube-api-access-dtwfd\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.826885 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.827807 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.827841 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6216abc33ee4323cfd7a1b0602636549bc5b0a652afc01bbb6b33ecbaa1f00d0/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 09 14:47:24 crc kubenswrapper[4770]: I1209 14:47:24.868849 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:47:25 crc kubenswrapper[4770]: I1209 14:47:25.010103 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 14:47:26 crc kubenswrapper[4770]: I1209 14:47:26.948524 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:47:27 crc kubenswrapper[4770]: I1209 14:47:27.020315 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:47:27 crc kubenswrapper[4770]: W1209 14:47:27.092418 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6221194b_80af_406a_acdd_f2fac253ae6e.slice/crio-a9433f4b4d2e4165267c0bb5f00d819b6ef13f031aef4403c1b0d70bda9bf47b WatchSource:0}: Error finding container a9433f4b4d2e4165267c0bb5f00d819b6ef13f031aef4403c1b0d70bda9bf47b: Status 404 returned error can't find the container with id a9433f4b4d2e4165267c0bb5f00d819b6ef13f031aef4403c1b0d70bda9bf47b Dec 09 14:47:27 crc kubenswrapper[4770]: I1209 14:47:27.109432 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:47:27 crc kubenswrapper[4770]: I1209 14:47:27.311670 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:47:27 crc kubenswrapper[4770]: W1209 14:47:27.324081 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8cae7a9_91c9_4280_a1b0_fe4e0f8d43fe.slice/crio-7c7e300bdec709beb4c3ac98479d8367263ea66a79d5d6ed8e4056f37c75e3e0 WatchSource:0}: Error finding container 7c7e300bdec709beb4c3ac98479d8367263ea66a79d5d6ed8e4056f37c75e3e0: Status 404 returned error can't find the container with id 7c7e300bdec709beb4c3ac98479d8367263ea66a79d5d6ed8e4056f37c75e3e0 Dec 09 14:47:27 crc kubenswrapper[4770]: I1209 14:47:27.814011 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6221194b-80af-406a-acdd-f2fac253ae6e","Type":"ContainerStarted","Data":"a9433f4b4d2e4165267c0bb5f00d819b6ef13f031aef4403c1b0d70bda9bf47b"} Dec 09 14:47:27 crc kubenswrapper[4770]: I1209 14:47:27.815164 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe","Type":"ContainerStarted","Data":"7c7e300bdec709beb4c3ac98479d8367263ea66a79d5d6ed8e4056f37c75e3e0"} Dec 09 14:47:27 crc kubenswrapper[4770]: I1209 14:47:27.816435 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" event={"ID":"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf","Type":"ContainerStarted","Data":"57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339"} Dec 09 14:47:28 crc kubenswrapper[4770]: I1209 14:47:28.838608 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6221194b-80af-406a-acdd-f2fac253ae6e","Type":"ContainerStarted","Data":"6e6b83ed7f4e152886a213059d744dccf91c3e43e6e3643d7d67a79ebfadc737"} Dec 09 14:47:28 crc kubenswrapper[4770]: I1209 14:47:28.841695 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe","Type":"ContainerStarted","Data":"40b14658ea4921ed7ff586eeace8b272a4d2878945ac2af528db20c57aac32c4"} Dec 09 14:47:28 crc kubenswrapper[4770]: I1209 14:47:28.847577 4770 generic.go:334] "Generic (PLEG): container finished" 
podID="e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" containerID="57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339" exitCode=0 Dec 09 14:47:28 crc kubenswrapper[4770]: I1209 14:47:28.847720 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" event={"ID":"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf","Type":"ContainerDied","Data":"57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339"} Dec 09 14:47:28 crc kubenswrapper[4770]: I1209 14:47:28.855830 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85564fd668-bwn85" event={"ID":"6bac9177-f03c-4a16-b2c4-da456883ca22","Type":"ContainerStarted","Data":"101c5e2fec11adb6c04435af230116545413096212cc154c372781ece74cf43b"} Dec 09 14:47:28 crc kubenswrapper[4770]: I1209 14:47:28.860740 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b79f96648-bcj77" event={"ID":"34aa6241-b9e1-45b1-915b-2de7e264e2a0","Type":"ContainerStarted","Data":"500d2ad3399edf350f5709ad86e963e52a2f4509c9832224a3445804627180a7"} Dec 09 14:47:28 crc kubenswrapper[4770]: I1209 14:47:28.860914 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:28 crc kubenswrapper[4770]: I1209 14:47:28.897217 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b79f96648-bcj77" podStartSLOduration=6.897191337 podStartE2EDuration="6.897191337s" podCreationTimestamp="2025-12-09 14:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:28.88721764 +0000 UTC m=+1480.783419766" watchObservedRunningTime="2025-12-09 14:47:28.897191337 +0000 UTC m=+1480.793393463" Dec 09 14:47:37 crc kubenswrapper[4770]: I1209 14:47:31.902400 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85564fd668-bwn85" event={"ID":"6bac9177-f03c-4a16-b2c4-da456883ca22","Type":"ContainerStarted","Data":"c9952e6079c6a9aae685805eccd3239035f02bccc1d8383b68a6a24643716476"} Dec 09 14:47:37 crc kubenswrapper[4770]: I1209 14:47:37.151137 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6221194b-80af-406a-acdd-f2fac253ae6e","Type":"ContainerStarted","Data":"57d17cc6fe7ea7ce7ed0cc522e271262c55f9cccf5eef1455480fa7f043ac543"} Dec 09 14:47:38 crc kubenswrapper[4770]: I1209 14:47:38.167507 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe","Type":"ContainerStarted","Data":"34a67c585437d0e0a179e602403a9941155c35b7f22c198c053da536a91a0043"} Dec 09 14:47:43 crc kubenswrapper[4770]: I1209 14:47:43.230303 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:43 crc kubenswrapper[4770]: I1209 14:47:43.230972 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:43 crc kubenswrapper[4770]: I1209 14:47:43.254943 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85564fd668-bwn85" podStartSLOduration=21.254922412 podStartE2EDuration="21.254922412s" podCreationTimestamp="2025-12-09 14:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 
14:47:43.252317833 +0000 UTC m=+1495.148519969" watchObservedRunningTime="2025-12-09 14:47:43.254922412 +0000 UTC m=+1495.151124548" Dec 09 14:47:44 crc kubenswrapper[4770]: I1209 14:47:44.238643 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerName="glance-httpd" containerID="cri-o://34a67c585437d0e0a179e602403a9941155c35b7f22c198c053da536a91a0043" gracePeriod=30 Dec 09 14:47:44 crc kubenswrapper[4770]: I1209 14:47:44.238717 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6221194b-80af-406a-acdd-f2fac253ae6e" containerName="glance-log" containerID="cri-o://6e6b83ed7f4e152886a213059d744dccf91c3e43e6e3643d7d67a79ebfadc737" gracePeriod=30 Dec 09 14:47:44 crc kubenswrapper[4770]: I1209 14:47:44.238872 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6221194b-80af-406a-acdd-f2fac253ae6e" containerName="glance-httpd" containerID="cri-o://57d17cc6fe7ea7ce7ed0cc522e271262c55f9cccf5eef1455480fa7f043ac543" gracePeriod=30 Dec 09 14:47:44 crc kubenswrapper[4770]: I1209 14:47:44.238659 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerName="glance-log" containerID="cri-o://40b14658ea4921ed7ff586eeace8b272a4d2878945ac2af528db20c57aac32c4" gracePeriod=30 Dec 09 14:47:44 crc kubenswrapper[4770]: I1209 14:47:44.265831 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.265815318 podStartE2EDuration="21.265815318s" podCreationTimestamp="2025-12-09 14:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:44.265140441 +0000 UTC m=+1496.161342577" watchObservedRunningTime="2025-12-09 14:47:44.265815318 +0000 UTC m=+1496.162017454" Dec 09 14:47:44 crc kubenswrapper[4770]: I1209 14:47:44.295412 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.295393549 podStartE2EDuration="21.295393549s" podCreationTimestamp="2025-12-09 14:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:44.289608013 +0000 UTC m=+1496.185810159" watchObservedRunningTime="2025-12-09 14:47:44.295393549 +0000 UTC m=+1496.191595685" Dec 09 14:47:46 crc kubenswrapper[4770]: I1209 14:47:46.260493 4770 generic.go:334] "Generic (PLEG): container finished" podID="6221194b-80af-406a-acdd-f2fac253ae6e" containerID="57d17cc6fe7ea7ce7ed0cc522e271262c55f9cccf5eef1455480fa7f043ac543" exitCode=0 Dec 09 14:47:46 crc kubenswrapper[4770]: I1209 14:47:46.261255 4770 generic.go:334] "Generic (PLEG): container finished" podID="6221194b-80af-406a-acdd-f2fac253ae6e" containerID="6e6b83ed7f4e152886a213059d744dccf91c3e43e6e3643d7d67a79ebfadc737" exitCode=143 Dec 09 14:47:46 crc kubenswrapper[4770]: I1209 14:47:46.260568 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6221194b-80af-406a-acdd-f2fac253ae6e","Type":"ContainerDied","Data":"57d17cc6fe7ea7ce7ed0cc522e271262c55f9cccf5eef1455480fa7f043ac543"} Dec 09 
14:47:46 crc kubenswrapper[4770]: I1209 14:47:46.261330 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6221194b-80af-406a-acdd-f2fac253ae6e","Type":"ContainerDied","Data":"6e6b83ed7f4e152886a213059d744dccf91c3e43e6e3643d7d67a79ebfadc737"} Dec 09 14:47:46 crc kubenswrapper[4770]: I1209 14:47:46.262874 4770 generic.go:334] "Generic (PLEG): container finished" podID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerID="40b14658ea4921ed7ff586eeace8b272a4d2878945ac2af528db20c57aac32c4" exitCode=143 Dec 09 14:47:46 crc kubenswrapper[4770]: I1209 14:47:46.262906 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe","Type":"ContainerDied","Data":"40b14658ea4921ed7ff586eeace8b272a4d2878945ac2af528db20c57aac32c4"} Dec 09 14:47:46 crc kubenswrapper[4770]: I1209 14:47:46.940642 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:46 crc kubenswrapper[4770]: I1209 14:47:46.940755 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:47:46 crc kubenswrapper[4770]: I1209 14:47:46.941168 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85564fd668-bwn85" Dec 09 14:47:47 crc kubenswrapper[4770]: I1209 14:47:47.292019 4770 generic.go:334] "Generic (PLEG): container finished" podID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerID="34a67c585437d0e0a179e602403a9941155c35b7f22c198c053da536a91a0043" exitCode=0 Dec 09 14:47:47 crc kubenswrapper[4770]: I1209 14:47:47.292144 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe","Type":"ContainerDied","Data":"34a67c585437d0e0a179e602403a9941155c35b7f22c198c053da536a91a0043"} Dec 09 14:47:48 crc kubenswrapper[4770]: E1209 14:47:48.949554 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 09 14:47:48 crc kubenswrapper[4770]: E1209 14:47:48.950361 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnpcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7af45895-9497-4517-a8d8-56a64510ac72): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 14:47:48 crc kubenswrapper[4770]: E1209 14:47:48.951711 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="7af45895-9497-4517-a8d8-56a64510ac72" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.019354 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.179620 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.180057 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-logs\") pod \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.180277 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-config-data\") pod \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.180349 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57mb4\" (UniqueName: \"kubernetes.io/projected/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-kube-api-access-57mb4\") pod \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.180408 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-httpd-run\") pod \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.180430 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-logs" (OuterVolumeSpecName: "logs") pod "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" (UID: "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.180479 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-scripts\") pod \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.180519 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-combined-ca-bundle\") pod \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\" (UID: \"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe\") " Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.181025 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" (UID: "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.181191 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.181215 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.186606 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-scripts" (OuterVolumeSpecName: "scripts") pod "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" (UID: "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.186690 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-kube-api-access-57mb4" (OuterVolumeSpecName: "kube-api-access-57mb4") pod "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" (UID: "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe"). InnerVolumeSpecName "kube-api-access-57mb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.210406 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658" (OuterVolumeSpecName: "glance") pod "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" (UID: "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe"). InnerVolumeSpecName "pvc-6ef0c204-ea23-4577-89b0-b667aeced658". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.247709 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" (UID: "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.265960 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-config-data" (OuterVolumeSpecName: "config-data") pod "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" (UID: "d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.282708 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.282748 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.282778 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") on node \"crc\" " Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.282790 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.282800 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57mb4\" (UniqueName: \"kubernetes.io/projected/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe-kube-api-access-57mb4\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.325580 4770 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.325802 4770 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6ef0c204-ea23-4577-89b0-b667aeced658" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658") on node "crc" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.332406 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.332474 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe","Type":"ContainerDied","Data":"7c7e300bdec709beb4c3ac98479d8367263ea66a79d5d6ed8e4056f37c75e3e0"} Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.332530 4770 scope.go:117] "RemoveContainer" containerID="34a67c585437d0e0a179e602403a9941155c35b7f22c198c053da536a91a0043" Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.332919 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7af45895-9497-4517-a8d8-56a64510ac72" containerName="ceilometer-notification-agent" containerID="cri-o://9f869192beb541bee58388bbc97590eb9af7eaa1cfa9e7849ed66e6871204ac6" gracePeriod=30 Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.332996 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7af45895-9497-4517-a8d8-56a64510ac72" containerName="sg-core" containerID="cri-o://15d5989033f41b50ebe88317af224c7c7eda49f0d83907739d3f5f6a18e7e040" gracePeriod=30 Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.340551 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.381253 4770 scope.go:117] "RemoveContainer" containerID="40b14658ea4921ed7ff586eeace8b272a4d2878945ac2af528db20c57aac32c4"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.384177 4770 reconciler_common.go:293] "Volume detached for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.415624 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.431895 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.446447 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 14:47:49 crc kubenswrapper[4770]: E1209 14:47:49.448106 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6221194b-80af-406a-acdd-f2fac253ae6e" containerName="glance-log"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.448214 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6221194b-80af-406a-acdd-f2fac253ae6e" containerName="glance-log"
Dec 09 14:47:49 crc kubenswrapper[4770]: E1209 14:47:49.448307 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerName="glance-httpd"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.448395 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerName="glance-httpd"
Dec 09 14:47:49 crc kubenswrapper[4770]: E1209 14:47:49.455700 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6221194b-80af-406a-acdd-f2fac253ae6e" containerName="glance-httpd"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.456975 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6221194b-80af-406a-acdd-f2fac253ae6e" containerName="glance-httpd"
Dec 09 14:47:49 crc kubenswrapper[4770]: E1209 14:47:49.459268 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerName="glance-log"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.459416 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerName="glance-log"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.459917 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6221194b-80af-406a-acdd-f2fac253ae6e" containerName="glance-httpd"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.460030 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6221194b-80af-406a-acdd-f2fac253ae6e" containerName="glance-log"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.460097 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerName="glance-httpd"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.460153 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" containerName="glance-log"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.462670 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.466577 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.466910 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.472504 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.494197 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"6221194b-80af-406a-acdd-f2fac253ae6e\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") "
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.494228 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-combined-ca-bundle\") pod \"6221194b-80af-406a-acdd-f2fac253ae6e\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") "
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.494335 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-logs\") pod \"6221194b-80af-406a-acdd-f2fac253ae6e\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") "
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.494392 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-config-data\") pod \"6221194b-80af-406a-acdd-f2fac253ae6e\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") "
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.494480 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-scripts\") pod \"6221194b-80af-406a-acdd-f2fac253ae6e\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") "
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.494501 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtwfd\" (UniqueName: \"kubernetes.io/projected/6221194b-80af-406a-acdd-f2fac253ae6e-kube-api-access-dtwfd\") pod \"6221194b-80af-406a-acdd-f2fac253ae6e\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") "
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.494537 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-httpd-run\") pod \"6221194b-80af-406a-acdd-f2fac253ae6e\" (UID: \"6221194b-80af-406a-acdd-f2fac253ae6e\") "
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.495582 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-logs" (OuterVolumeSpecName: "logs") pod "6221194b-80af-406a-acdd-f2fac253ae6e" (UID: "6221194b-80af-406a-acdd-f2fac253ae6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.495919 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6221194b-80af-406a-acdd-f2fac253ae6e" (UID: "6221194b-80af-406a-acdd-f2fac253ae6e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.517923 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-scripts" (OuterVolumeSpecName: "scripts") pod "6221194b-80af-406a-acdd-f2fac253ae6e" (UID: "6221194b-80af-406a-acdd-f2fac253ae6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.518154 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6221194b-80af-406a-acdd-f2fac253ae6e-kube-api-access-dtwfd" (OuterVolumeSpecName: "kube-api-access-dtwfd") pod "6221194b-80af-406a-acdd-f2fac253ae6e" (UID: "6221194b-80af-406a-acdd-f2fac253ae6e"). InnerVolumeSpecName "kube-api-access-dtwfd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.527573 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c" (OuterVolumeSpecName: "glance") pod "6221194b-80af-406a-acdd-f2fac253ae6e" (UID: "6221194b-80af-406a-acdd-f2fac253ae6e"). InnerVolumeSpecName "pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.558543 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6221194b-80af-406a-acdd-f2fac253ae6e" (UID: "6221194b-80af-406a-acdd-f2fac253ae6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.597204 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-logs\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.599670 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-scripts\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.602345 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89bhp\" (UniqueName: \"kubernetes.io/projected/91d4d708-1c18-4827-92eb-349b5eaf6d2f-kube-api-access-89bhp\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.602567 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.602766 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.603099 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.603276 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.603535 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-config-data\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.604057 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.604178 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtwfd\" (UniqueName: \"kubernetes.io/projected/6221194b-80af-406a-acdd-f2fac253ae6e-kube-api-access-dtwfd\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.604396 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.604707 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") on node \"crc\" "
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.604893 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.605413 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6221194b-80af-406a-acdd-f2fac253ae6e-logs\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.614406 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-config-data" (OuterVolumeSpecName: "config-data") pod "6221194b-80af-406a-acdd-f2fac253ae6e" (UID: "6221194b-80af-406a-acdd-f2fac253ae6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.648747 4770 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.649082 4770 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c") on node "crc"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.707428 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-logs\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.707840 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-scripts\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.707966 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89bhp\" (UniqueName: \"kubernetes.io/projected/91d4d708-1c18-4827-92eb-349b5eaf6d2f-kube-api-access-89bhp\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.708048 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.708153 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.708240 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.708351 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.709494 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-config-data\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.709770 4770 reconciler_common.go:293] "Volume detached for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.709910 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6221194b-80af-406a-acdd-f2fac253ae6e-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.710565 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.711859 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.711887 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4d3471d01ab9f49c6d7e7ed17e2f2c5045d7b7d89c846663711f4829da0cc6bb/globalmount\"" pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.712329 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-logs\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.713919 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.716624 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-scripts\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.718425 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-config-data\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.719082 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.727301 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89bhp\" (UniqueName: \"kubernetes.io/projected/91d4d708-1c18-4827-92eb-349b5eaf6d2f-kube-api-access-89bhp\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.757542 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " pod="openstack/glance-default-external-api-0"
Dec 09 14:47:49 crc kubenswrapper[4770]: I1209 14:47:49.865024 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.346988 4770 generic.go:334] "Generic (PLEG): container finished" podID="efd509c2-c0ac-450e-84d3-14e9e8935f1c" containerID="acb47c1355df5a7b18e5cccaf6e6aa2b38b84a39db33f01c8e8af0173cea0f8f" exitCode=0
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.347072 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bwk2j" event={"ID":"efd509c2-c0ac-450e-84d3-14e9e8935f1c","Type":"ContainerDied","Data":"acb47c1355df5a7b18e5cccaf6e6aa2b38b84a39db33f01c8e8af0173cea0f8f"}
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.359780 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" event={"ID":"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf","Type":"ContainerStarted","Data":"29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc"}
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.360786 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.362837 4770 generic.go:334] "Generic (PLEG): container finished" podID="7af45895-9497-4517-a8d8-56a64510ac72" containerID="15d5989033f41b50ebe88317af224c7c7eda49f0d83907739d3f5f6a18e7e040" exitCode=2
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.362864 4770 generic.go:334] "Generic (PLEG): container finished" podID="7af45895-9497-4517-a8d8-56a64510ac72" containerID="9f869192beb541bee58388bbc97590eb9af7eaa1cfa9e7849ed66e6871204ac6" exitCode=0
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.362914 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7af45895-9497-4517-a8d8-56a64510ac72","Type":"ContainerDied","Data":"15d5989033f41b50ebe88317af224c7c7eda49f0d83907739d3f5f6a18e7e040"}
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.362936 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7af45895-9497-4517-a8d8-56a64510ac72","Type":"ContainerDied","Data":"9f869192beb541bee58388bbc97590eb9af7eaa1cfa9e7849ed66e6871204ac6"}
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.365649 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-9vv8k" event={"ID":"fbfdafc8-508d-4eec-9496-7058a6d1d49b","Type":"ContainerStarted","Data":"d64c4f4bca7327a10af13f66da55bb6ae201727cf953df708f837e0c525fe72f"}
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.371809 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dpmnb" event={"ID":"b7669f5b-7406-4ef5-833b-f69821551b08","Type":"ContainerStarted","Data":"c3f5c3aff4d7cd88503269c521e5f346c17559e69d1db9d9b3d4514c0aaf414b"}
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.380040 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.381231 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6221194b-80af-406a-acdd-f2fac253ae6e","Type":"ContainerDied","Data":"a9433f4b4d2e4165267c0bb5f00d819b6ef13f031aef4403c1b0d70bda9bf47b"}
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.381303 4770 scope.go:117] "RemoveContainer" containerID="57d17cc6fe7ea7ce7ed0cc522e271262c55f9cccf5eef1455480fa7f043ac543"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.383767 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-k7gz4" event={"ID":"017d5a4f-99ba-4d3f-9053-207e6f414ab1","Type":"ContainerStarted","Data":"bb6c72299af868fc9e19c08ee82ba82f5f78117c5f167dfe7a3996fb401c3ff9"}
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.407685 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" podStartSLOduration=27.407664577 podStartE2EDuration="27.407664577s" podCreationTimestamp="2025-12-09 14:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:50.391509627 +0000 UTC m=+1502.287711783" watchObservedRunningTime="2025-12-09 14:47:50.407664577 +0000 UTC m=+1502.303866713"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.414178 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-9vv8k" podStartSLOduration=3.255851003 podStartE2EDuration="1m30.414158851s" podCreationTimestamp="2025-12-09 14:46:20 +0000 UTC" firstStartedPulling="2025-12-09 14:46:21.939592742 +0000 UTC m=+1413.835794878" lastFinishedPulling="2025-12-09 14:47:49.09790059 +0000 UTC m=+1500.994102726" observedRunningTime="2025-12-09 14:47:50.407230706 +0000 UTC m=+1502.303432852" watchObservedRunningTime="2025-12-09 14:47:50.414158851 +0000 UTC m=+1502.310360997"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.421382 4770 scope.go:117] "RemoveContainer" containerID="6e6b83ed7f4e152886a213059d744dccf91c3e43e6e3643d7d67a79ebfadc737"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.459206 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-k7gz4" podStartSLOduration=3.3094329350000002 podStartE2EDuration="1m30.459185614s" podCreationTimestamp="2025-12-09 14:46:20 +0000 UTC" firstStartedPulling="2025-12-09 14:46:21.948153711 +0000 UTC m=+1413.844355847" lastFinishedPulling="2025-12-09 14:47:49.09790639 +0000 UTC m=+1500.994108526" observedRunningTime="2025-12-09 14:47:50.424675101 +0000 UTC m=+1502.320877237" watchObservedRunningTime="2025-12-09 14:47:50.459185614 +0000 UTC m=+1502.355387750"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.495084 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dpmnb" podStartSLOduration=3.021441835 podStartE2EDuration="1m30.495064992s" podCreationTimestamp="2025-12-09 14:46:20 +0000 UTC" firstStartedPulling="2025-12-09 14:46:21.621893129 +0000 UTC m=+1413.518095265" lastFinishedPulling="2025-12-09 14:47:49.095516286 +0000 UTC m=+1500.991718422" observedRunningTime="2025-12-09 14:47:50.445265822 +0000 UTC m=+1502.341467958" watchObservedRunningTime="2025-12-09 14:47:50.495064992 +0000 UTC m=+1502.391267128"
Dec 09 14:47:50 crc kubenswrapper[4770]: W1209 14:47:50.508290 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91d4d708_1c18_4827_92eb_349b5eaf6d2f.slice/crio-897d30b8a1b34203472ecdc88c3229913b2ad99533004a64c4b34719b10927b6 WatchSource:0}: Error finding container 897d30b8a1b34203472ecdc88c3229913b2ad99533004a64c4b34719b10927b6: Status 404 returned error can't find the container with id 897d30b8a1b34203472ecdc88c3229913b2ad99533004a64c4b34719b10927b6
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.533408 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.542586 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.551917 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.553898 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.556716 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.556955 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.563123 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.569486 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.569556 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.569601 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.569641 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.570072 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2w8x\" (UniqueName: \"kubernetes.io/projected/f628655f-ba3b-450b-8426-a5acfabd2759-kube-api-access-f2w8x\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.570126 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.570217 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-logs\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.570375 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.575767 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.612869 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6221194b-80af-406a-acdd-f2fac253ae6e" path="/var/lib/kubelet/pods/6221194b-80af-406a-acdd-f2fac253ae6e/volumes"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.613870 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe" path="/var/lib/kubelet/pods/d8cae7a9-91c9-4280-a1b0-fe4e0f8d43fe/volumes"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.626567 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.670953 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-log-httpd\") pod \"7af45895-9497-4517-a8d8-56a64510ac72\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") "
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.670997 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnpcs\" (UniqueName: \"kubernetes.io/projected/7af45895-9497-4517-a8d8-56a64510ac72-kube-api-access-vnpcs\") pod \"7af45895-9497-4517-a8d8-56a64510ac72\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") "
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.671427 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7af45895-9497-4517-a8d8-56a64510ac72" (UID: "7af45895-9497-4517-a8d8-56a64510ac72"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.671530 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-sg-core-conf-yaml\") pod \"7af45895-9497-4517-a8d8-56a64510ac72\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") "
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.672319 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-scripts\") pod \"7af45895-9497-4517-a8d8-56a64510ac72\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") "
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.672356 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-combined-ca-bundle\") pod \"7af45895-9497-4517-a8d8-56a64510ac72\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") "
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.672426 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-config-data\") pod \"7af45895-9497-4517-a8d8-56a64510ac72\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") "
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.672725 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-run-httpd\") pod \"7af45895-9497-4517-a8d8-56a64510ac72\" (UID: \"7af45895-9497-4517-a8d8-56a64510ac72\") "
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.672943 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7af45895-9497-4517-a8d8-56a64510ac72" (UID: "7af45895-9497-4517-a8d8-56a64510ac72"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673103 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2w8x\" (UniqueName: \"kubernetes.io/projected/f628655f-ba3b-450b-8426-a5acfabd2759-kube-api-access-f2w8x\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673154 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673231 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-logs\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673270 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673320 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673356 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673384 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673416 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673537 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.673550 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7af45895-9497-4517-a8d8-56a64510ac72-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.677129 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af45895-9497-4517-a8d8-56a64510ac72-kube-api-access-vnpcs" (OuterVolumeSpecName: "kube-api-access-vnpcs") pod "7af45895-9497-4517-a8d8-56a64510ac72" (UID: "7af45895-9497-4517-a8d8-56a64510ac72"). InnerVolumeSpecName "kube-api-access-vnpcs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.677190 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-scripts" (OuterVolumeSpecName: "scripts") pod "7af45895-9497-4517-a8d8-56a64510ac72" (UID: "7af45895-9497-4517-a8d8-56a64510ac72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.677691 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-logs\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.678225 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.678383 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.681218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.681569 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.681603 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6216abc33ee4323cfd7a1b0602636549bc5b0a652afc01bbb6b33ecbaa1f00d0/globalmount\"" pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.682506 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.690551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.693964 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2w8x\" (UniqueName: \"kubernetes.io/projected/f628655f-ba3b-450b-8426-a5acfabd2759-kube-api-access-f2w8x\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.718857 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7af45895-9497-4517-a8d8-56a64510ac72" (UID: "7af45895-9497-4517-a8d8-56a64510ac72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.719138 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-config-data" (OuterVolumeSpecName: "config-data") pod "7af45895-9497-4517-a8d8-56a64510ac72" (UID: "7af45895-9497-4517-a8d8-56a64510ac72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.726900 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7af45895-9497-4517-a8d8-56a64510ac72" (UID: "7af45895-9497-4517-a8d8-56a64510ac72"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.736374 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.775006 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.775037 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.775046 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.775054 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af45895-9497-4517-a8d8-56a64510ac72-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.775064 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnpcs\" (UniqueName: \"kubernetes.io/projected/7af45895-9497-4517-a8d8-56a64510ac72-kube-api-access-vnpcs\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:50 crc kubenswrapper[4770]: I1209 14:47:50.922578 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.398270 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91d4d708-1c18-4827-92eb-349b5eaf6d2f","Type":"ContainerStarted","Data":"cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c"}
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.398720 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91d4d708-1c18-4827-92eb-349b5eaf6d2f","Type":"ContainerStarted","Data":"897d30b8a1b34203472ecdc88c3229913b2ad99533004a64c4b34719b10927b6"}
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.403218 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7af45895-9497-4517-a8d8-56a64510ac72","Type":"ContainerDied","Data":"aae06d4b854f01b6bffab73049e7ce497961f07ca87890aae0ec113ce5757b48"}
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.403320 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.403356 4770 scope.go:117] "RemoveContainer" containerID="15d5989033f41b50ebe88317af224c7c7eda49f0d83907739d3f5f6a18e7e040"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.433058 4770 scope.go:117] "RemoveContainer" containerID="9f869192beb541bee58388bbc97590eb9af7eaa1cfa9e7849ed66e6871204ac6"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.544685 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.565897 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.597527 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 09 14:47:51 crc kubenswrapper[4770]: E1209 14:47:51.598353 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af45895-9497-4517-a8d8-56a64510ac72" containerName="ceilometer-notification-agent"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.598379 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af45895-9497-4517-a8d8-56a64510ac72" containerName="ceilometer-notification-agent"
Dec 09 14:47:51 crc kubenswrapper[4770]: E1209 14:47:51.598417 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af45895-9497-4517-a8d8-56a64510ac72" containerName="sg-core"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.598427 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af45895-9497-4517-a8d8-56a64510ac72" containerName="sg-core"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.598667 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af45895-9497-4517-a8d8-56a64510ac72" containerName="sg-core"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.598699 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af45895-9497-4517-a8d8-56a64510ac72" containerName="ceilometer-notification-agent"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.601058 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.604451 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.604619 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.624009 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.632969 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.804017 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bwk2j"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.808196 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-log-httpd\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.808320 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.808404 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-config-data\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.808447 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2nl\" (UniqueName: \"kubernetes.io/projected/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-kube-api-access-qz2nl\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.808672 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-run-httpd\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.808737 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-scripts\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.808799 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.910337 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-combined-ca-bundle\") pod \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") "
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.910388 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-577pq\" (UniqueName: \"kubernetes.io/projected/efd509c2-c0ac-450e-84d3-14e9e8935f1c-kube-api-access-577pq\") pod \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") "
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.910449 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-config\") pod \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\" (UID: \"efd509c2-c0ac-450e-84d3-14e9e8935f1c\") "
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.911021 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.911084 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-config-data\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.911114 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2nl\" (UniqueName: \"kubernetes.io/projected/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-kube-api-access-qz2nl\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.911191 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-run-httpd\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.911218 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-scripts\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.911249 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.911319 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-log-httpd\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.911806 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-log-httpd\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.913779 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-run-httpd\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.919464 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.919519 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd509c2-c0ac-450e-84d3-14e9e8935f1c-kube-api-access-577pq" (OuterVolumeSpecName: "kube-api-access-577pq") pod "efd509c2-c0ac-450e-84d3-14e9e8935f1c" (UID: "efd509c2-c0ac-450e-84d3-14e9e8935f1c"). InnerVolumeSpecName "kube-api-access-577pq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.919658 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.922909 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-config-data\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.928392 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-scripts\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.934116 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2nl\" (UniqueName: \"kubernetes.io/projected/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-kube-api-access-qz2nl\") pod \"ceilometer-0\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") " pod="openstack/ceilometer-0"
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.953802 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-config" (OuterVolumeSpecName: "config") pod "efd509c2-c0ac-450e-84d3-14e9e8935f1c" (UID: "efd509c2-c0ac-450e-84d3-14e9e8935f1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:47:51 crc kubenswrapper[4770]: I1209 14:47:51.960134 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efd509c2-c0ac-450e-84d3-14e9e8935f1c" (UID: "efd509c2-c0ac-450e-84d3-14e9e8935f1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.013028 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.013081 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-577pq\" (UniqueName: \"kubernetes.io/projected/efd509c2-c0ac-450e-84d3-14e9e8935f1c-kube-api-access-577pq\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.013097 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/efd509c2-c0ac-450e-84d3-14e9e8935f1c-config\") on node \"crc\" DevicePath \"\""
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.229646 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.427757 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bwk2j"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.427720 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bwk2j" event={"ID":"efd509c2-c0ac-450e-84d3-14e9e8935f1c","Type":"ContainerDied","Data":"095a8269502231109925ae3a8b9fe7ba78918b935e25e3f11a22782be92df643"}
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.428205 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="095a8269502231109925ae3a8b9fe7ba78918b935e25e3f11a22782be92df643"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.431655 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f628655f-ba3b-450b-8426-a5acfabd2759","Type":"ContainerStarted","Data":"c60512b3560f588f98178612636d378b976d61e05636a7623ae8cc6dfc86f5bf"}
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.431694 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f628655f-ba3b-450b-8426-a5acfabd2759","Type":"ContainerStarted","Data":"a9f41010ad0a5f7b8562bdacb6bd326163cfc44d091bd916e5d32708c77f0a78"}
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.447684 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91d4d708-1c18-4827-92eb-349b5eaf6d2f","Type":"ContainerStarted","Data":"437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b"}
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.481847 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.481830099 podStartE2EDuration="3.481830099s" podCreationTimestamp="2025-12-09 14:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:52.468499642 +0000 UTC m=+1504.364701778" watchObservedRunningTime="2025-12-09 14:47:52.481830099 +0000 UTC m=+1504.378032235"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.674317 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af45895-9497-4517-a8d8-56a64510ac72" path="/var/lib/kubelet/pods/7af45895-9497-4517-a8d8-56a64510ac72/volumes"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.689557 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qjzbv"]
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.689803 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" podUID="e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" containerName="dnsmasq-dns" containerID="cri-o://29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc" gracePeriod=10
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.722049 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l8bkm"]
Dec 09 14:47:52 crc kubenswrapper[4770]: E1209 14:47:52.722945 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd509c2-c0ac-450e-84d3-14e9e8935f1c" containerName="neutron-db-sync"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.723060 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd509c2-c0ac-450e-84d3-14e9e8935f1c" containerName="neutron-db-sync"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.725109 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd509c2-c0ac-450e-84d3-14e9e8935f1c" containerName="neutron-db-sync"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.732093 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.747052 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l8bkm"]
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.817486 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cb5d9f9bd-qssdr"]
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.819476 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cb5d9f9bd-qssdr"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.824854 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.825191 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.826332 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.826685 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pfjpt"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.831272 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.831313 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-svc\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.831368 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.831434 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-config\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.831470 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286rd\" (UniqueName: \"kubernetes.io/projected/ed270752-0ddd-4eff-a26a-cc14184e2898-kube-api-access-286rd\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.831503 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm"
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.842627 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.860811 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cb5d9f9bd-qssdr"]
Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933144 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-ovndb-tls-certs\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933241 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933273 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-svc\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933304 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933354 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-httpd-config\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933397 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-config\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933425 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-combined-ca-bundle\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933457 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286rd\" (UniqueName: \"kubernetes.io/projected/ed270752-0ddd-4eff-a26a-cc14184e2898-kube-api-access-286rd\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933484 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-config\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933525 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.933616 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7bk5\" (UniqueName: \"kubernetes.io/projected/ba146650-9074-423d-aa8f-9cded3a49030-kube-api-access-h7bk5\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.935150 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.937340 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.938342 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-config\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.939341 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.939392 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-svc\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:52 crc kubenswrapper[4770]: I1209 14:47:52.955838 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286rd\" (UniqueName: \"kubernetes.io/projected/ed270752-0ddd-4eff-a26a-cc14184e2898-kube-api-access-286rd\") pod \"dnsmasq-dns-6b7b667979-l8bkm\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.035701 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7bk5\" (UniqueName: \"kubernetes.io/projected/ba146650-9074-423d-aa8f-9cded3a49030-kube-api-access-h7bk5\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.035801 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-ovndb-tls-certs\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.035919 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-httpd-config\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.035980 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-combined-ca-bundle\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.036016 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-config\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.040436 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-httpd-config\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.042945 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-combined-ca-bundle\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.046988 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-ovndb-tls-certs\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.055633 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7bk5\" (UniqueName: \"kubernetes.io/projected/ba146650-9074-423d-aa8f-9cded3a49030-kube-api-access-h7bk5\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.056180 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-config\") pod \"neutron-6cb5d9f9bd-qssdr\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.161549 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.185353 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.368278 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.490927 4770 generic.go:334] "Generic (PLEG): container finished" podID="e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" containerID="29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc" exitCode=0 Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.491442 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.491802 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" event={"ID":"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf","Type":"ContainerDied","Data":"29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc"} Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.491863 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qjzbv" event={"ID":"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf","Type":"ContainerDied","Data":"8f7d61c5b01715f7959e1b493be01159a7f56fc0937d76d14a0a914cbcfe8d92"} Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.491882 4770 scope.go:117] "RemoveContainer" containerID="29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.502081 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f628655f-ba3b-450b-8426-a5acfabd2759","Type":"ContainerStarted","Data":"530fb3e85f04971560ba128056414898ddab3197b8a20779ebae00a282a940e3"} Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.505216 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerStarted","Data":"b400cce9cb794cb4f7b48b6c0efac5cce823ef848c31edf7e33f92602a0d148c"} Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.535195 4770 scope.go:117] "RemoveContainer" containerID="57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.551658 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.551614958 podStartE2EDuration="3.551614958s" podCreationTimestamp="2025-12-09 14:47:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:53.53037746 +0000 UTC m=+1505.426579596" watchObservedRunningTime="2025-12-09 14:47:53.551614958 +0000 UTC m=+1505.447817094" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.572450 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-nb\") pod \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.572562 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-config\") pod \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\" (UID: 
\"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.572641 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-swift-storage-0\") pod \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.572667 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-sb\") pod \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.572787 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-svc\") pod \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.572925 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdt49\" (UniqueName: \"kubernetes.io/projected/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-kube-api-access-jdt49\") pod \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\" (UID: \"e8d79aac-9bb5-48a9-b9e0-2650f9285aaf\") " Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.584891 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-kube-api-access-jdt49" (OuterVolumeSpecName: "kube-api-access-jdt49") pod "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" (UID: "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf"). InnerVolumeSpecName "kube-api-access-jdt49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.615298 4770 scope.go:117] "RemoveContainer" containerID="29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc" Dec 09 14:47:53 crc kubenswrapper[4770]: E1209 14:47:53.618299 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc\": container with ID starting with 29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc not found: ID does not exist" containerID="29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.618343 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc"} err="failed to get container status \"29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc\": rpc error: code = NotFound desc = could not find container \"29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc\": container with ID starting with 29250dc1760d0bf5d1e9ef033fbabb008dc719ca59b743101f3b1a1e3a3702bc not found: ID does not exist" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.618375 4770 scope.go:117] "RemoveContainer" containerID="57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339" Dec 09 14:47:53 crc kubenswrapper[4770]: E1209 14:47:53.641680 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339\": container with ID starting with 57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339 not found: ID does not exist" containerID="57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.641819 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339"} err="failed to get container status \"57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339\": rpc error: code = NotFound desc = could not find container \"57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339\": container with ID starting with 57b124fdcdb9e83da05791090c421495f3c60d3b95a9520c8624d2be11779339 not found: ID does not exist" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.675792 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdt49\" (UniqueName: \"kubernetes.io/projected/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-kube-api-access-jdt49\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.692890 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" (UID: "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.728818 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" (UID: "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.729463 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-config" (OuterVolumeSpecName: "config") pod "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" (UID: "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.739030 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" (UID: "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.739501 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" (UID: "e8d79aac-9bb5-48a9-b9e0-2650f9285aaf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.768952 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l8bkm"] Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.782622 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.784678 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.784701 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.784714 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.784739 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.847252 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qjzbv"] Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.862302 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-56df8fb6b7-qjzbv"] Dec 09 14:47:53 crc kubenswrapper[4770]: I1209 14:47:53.948574 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cb5d9f9bd-qssdr"] Dec 09 14:47:54 crc kubenswrapper[4770]: I1209 14:47:54.517322 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" event={"ID":"ed270752-0ddd-4eff-a26a-cc14184e2898","Type":"ContainerStarted","Data":"90aee21e6dc30c597b1c25dcab0f2c622c3963c2e56c7f476686ac6fb5e8ffd4"} Dec 09 14:47:54 crc kubenswrapper[4770]: I1209 14:47:54.601373 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" path="/var/lib/kubelet/pods/e8d79aac-9bb5-48a9-b9e0-2650f9285aaf/volumes" Dec 09 14:47:54 crc kubenswrapper[4770]: W1209 14:47:54.815904 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba146650_9074_423d_aa8f_9cded3a49030.slice/crio-ee2fec114e2e0140435fe3218cfdc4c51458015d52b612d28ef4eb68043d4642 WatchSource:0}: Error finding container ee2fec114e2e0140435fe3218cfdc4c51458015d52b612d28ef4eb68043d4642: Status 404 returned error can't find the container with id ee2fec114e2e0140435fe3218cfdc4c51458015d52b612d28ef4eb68043d4642 Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.025811 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fcb9dd595-tqt29"] Dec 09 14:47:55 crc kubenswrapper[4770]: E1209 14:47:55.026271 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" containerName="init" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.026306 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" containerName="init" Dec 09 14:47:55 crc kubenswrapper[4770]: E1209 14:47:55.026334 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" containerName="dnsmasq-dns" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.026340 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" containerName="dnsmasq-dns" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.026558 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d79aac-9bb5-48a9-b9e0-2650f9285aaf" containerName="dnsmasq-dns" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.027606 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.038855 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fcb9dd595-tqt29"] Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.038871 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.039247 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.115775 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-public-tls-certs\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.115839 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-config\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.115869 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-httpd-config\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.116142 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxh57\" (UniqueName: \"kubernetes.io/projected/678619ae-5986-49ce-b307-53661c4f94f9-kube-api-access-qxh57\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.116192 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-ovndb-tls-certs\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.116240 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-internal-tls-certs\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.116275 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-combined-ca-bundle\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.219270 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxh57\" (UniqueName: 
\"kubernetes.io/projected/678619ae-5986-49ce-b307-53661c4f94f9-kube-api-access-qxh57\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.219670 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-ovndb-tls-certs\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.219781 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-internal-tls-certs\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.219838 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-combined-ca-bundle\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.219932 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-public-tls-certs\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.220014 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-config\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.220044 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-httpd-config\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.225966 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-internal-tls-certs\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.226750 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-ovndb-tls-certs\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.227044 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-httpd-config\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") 
" pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.227281 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-public-tls-certs\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.230289 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-config\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.237827 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxh57\" (UniqueName: \"kubernetes.io/projected/678619ae-5986-49ce-b307-53661c4f94f9-kube-api-access-qxh57\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.240216 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678619ae-5986-49ce-b307-53661c4f94f9-combined-ca-bundle\") pod \"neutron-6fcb9dd595-tqt29\" (UID: \"678619ae-5986-49ce-b307-53661c4f94f9\") " pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.434753 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.530374 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-b79f96648-bcj77" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.605950 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerStarted","Data":"6fe8597e6f65ed70f8767c2fcea27020bf43d3fc2022e12366552013500f81ba"} Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.627431 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb5d9f9bd-qssdr" event={"ID":"ba146650-9074-423d-aa8f-9cded3a49030","Type":"ContainerStarted","Data":"329e6d43c176fdcd2138c8fc42c34bbe2d96bae5abcfab163ed371ccc8f5d9c4"} Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.627472 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb5d9f9bd-qssdr" event={"ID":"ba146650-9074-423d-aa8f-9cded3a49030","Type":"ContainerStarted","Data":"ee2fec114e2e0140435fe3218cfdc4c51458015d52b612d28ef4eb68043d4642"} Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.627836 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.633991 4770 generic.go:334] "Generic (PLEG): container finished" podID="ed270752-0ddd-4eff-a26a-cc14184e2898" containerID="11019bf1655511b12dd59a97f6e65440e090e7d569bf875bb764f78fbdbb91d0" exitCode=0 Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.634040 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" 
event={"ID":"ed270752-0ddd-4eff-a26a-cc14184e2898","Type":"ContainerDied","Data":"11019bf1655511b12dd59a97f6e65440e090e7d569bf875bb764f78fbdbb91d0"} Dec 09 14:47:55 crc kubenswrapper[4770]: I1209 14:47:55.710399 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cb5d9f9bd-qssdr" podStartSLOduration=3.710376187 podStartE2EDuration="3.710376187s" podCreationTimestamp="2025-12-09 14:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:55.66517741 +0000 UTC m=+1507.561379546" watchObservedRunningTime="2025-12-09 14:47:55.710376187 +0000 UTC m=+1507.606578323" Dec 09 14:47:56 crc kubenswrapper[4770]: I1209 14:47:56.135176 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fcb9dd595-tqt29"] Dec 09 14:47:56 crc kubenswrapper[4770]: I1209 14:47:56.671369 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb5d9f9bd-qssdr" event={"ID":"ba146650-9074-423d-aa8f-9cded3a49030","Type":"ContainerStarted","Data":"5d8674029ae744bd48620e8d7354335e652bcddaf85cf30b436b0516857a1007"} Dec 09 14:47:56 crc kubenswrapper[4770]: I1209 14:47:56.674111 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcb9dd595-tqt29" event={"ID":"678619ae-5986-49ce-b307-53661c4f94f9","Type":"ContainerStarted","Data":"dfc0369af645fdb6a631bb2197afd97deb8452ba3478bbc79ac193b9acc462a7"} Dec 09 14:47:56 crc kubenswrapper[4770]: I1209 14:47:56.674170 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcb9dd595-tqt29" event={"ID":"678619ae-5986-49ce-b307-53661c4f94f9","Type":"ContainerStarted","Data":"7e44036cb90a7b573ca2ab91e4bcab986b50bbad95cb6d40b0290c2935a423e5"} Dec 09 14:47:56 crc kubenswrapper[4770]: I1209 14:47:56.676691 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" event={"ID":"ed270752-0ddd-4eff-a26a-cc14184e2898","Type":"ContainerStarted","Data":"fda2c4502b17d79707fdb59d6843569c367746d964e48b8b6a3ba486b99aedaf"} Dec 09 14:47:56 crc kubenswrapper[4770]: I1209 14:47:56.676777 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:47:56 crc kubenswrapper[4770]: I1209 14:47:56.678658 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerStarted","Data":"34fc267bfba99ef32416451aa3d522773a9d5dec27c46652d45340442c080ae1"} Dec 09 14:47:57 crc kubenswrapper[4770]: I1209 14:47:57.690814 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcb9dd595-tqt29" event={"ID":"678619ae-5986-49ce-b307-53661c4f94f9","Type":"ContainerStarted","Data":"40791cdf95bc18427752073ab235f84a05c795642396ee3afc56c7a71254d3e2"} Dec 09 14:47:57 crc kubenswrapper[4770]: I1209 14:47:57.691254 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:47:57 crc kubenswrapper[4770]: I1209 14:47:57.692836 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerStarted","Data":"77a66348c27c97c5e05499508540075e8b6200af2dbc9f699dc87d60de83fdcd"} Dec 09 14:47:57 crc kubenswrapper[4770]: I1209 14:47:57.695292 4770 generic.go:334] "Generic (PLEG): container finished" podID="017d5a4f-99ba-4d3f-9053-207e6f414ab1" 
containerID="bb6c72299af868fc9e19c08ee82ba82f5f78117c5f167dfe7a3996fb401c3ff9" exitCode=0 Dec 09 14:47:57 crc kubenswrapper[4770]: I1209 14:47:57.695349 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-k7gz4" event={"ID":"017d5a4f-99ba-4d3f-9053-207e6f414ab1","Type":"ContainerDied","Data":"bb6c72299af868fc9e19c08ee82ba82f5f78117c5f167dfe7a3996fb401c3ff9"} Dec 09 14:47:57 crc kubenswrapper[4770]: I1209 14:47:57.709144 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fcb9dd595-tqt29" podStartSLOduration=2.709121234 podStartE2EDuration="2.709121234s" podCreationTimestamp="2025-12-09 14:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:57.707122591 +0000 UTC m=+1509.603324727" watchObservedRunningTime="2025-12-09 14:47:57.709121234 +0000 UTC m=+1509.605323380" Dec 09 14:47:57 crc kubenswrapper[4770]: I1209 14:47:57.713943 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" podStartSLOduration=5.713920422 podStartE2EDuration="5.713920422s" podCreationTimestamp="2025-12-09 14:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:47:56.70730668 +0000 UTC m=+1508.603508816" watchObservedRunningTime="2025-12-09 14:47:57.713920422 +0000 UTC m=+1509.610122578" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.786389 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.788288 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.791904 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.792515 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-n28ph" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.792785 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.810007 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.892551 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrfvc\" (UniqueName: \"kubernetes.io/projected/e82b342d-8682-4892-836b-6248fcea0d3f-kube-api-access-mrfvc\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.892599 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e82b342d-8682-4892-836b-6248fcea0d3f-openstack-config-secret\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.892722 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e82b342d-8682-4892-836b-6248fcea0d3f-openstack-config\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.893542 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82b342d-8682-4892-836b-6248fcea0d3f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.995955 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrfvc\" (UniqueName: \"kubernetes.io/projected/e82b342d-8682-4892-836b-6248fcea0d3f-kube-api-access-mrfvc\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.995993 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e82b342d-8682-4892-836b-6248fcea0d3f-openstack-config-secret\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.996089 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e82b342d-8682-4892-836b-6248fcea0d3f-openstack-config\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.996122 4770 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82b342d-8682-4892-836b-6248fcea0d3f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:58 crc kubenswrapper[4770]: I1209 14:47:58.997657 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e82b342d-8682-4892-836b-6248fcea0d3f-openstack-config\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.005559 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e82b342d-8682-4892-836b-6248fcea0d3f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.016179 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrfvc\" (UniqueName: \"kubernetes.io/projected/e82b342d-8682-4892-836b-6248fcea0d3f-kube-api-access-mrfvc\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.018439 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e82b342d-8682-4892-836b-6248fcea0d3f-openstack-config-secret\") pod \"openstackclient\" (UID: \"e82b342d-8682-4892-836b-6248fcea0d3f\") " pod="openstack/openstackclient" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.139650 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.266116 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.412998 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-db-sync-config-data\") pod \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.413071 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb2cj\" (UniqueName: \"kubernetes.io/projected/017d5a4f-99ba-4d3f-9053-207e6f414ab1-kube-api-access-pb2cj\") pod \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.413138 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-combined-ca-bundle\") pod \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\" (UID: \"017d5a4f-99ba-4d3f-9053-207e6f414ab1\") " Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.428543 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "017d5a4f-99ba-4d3f-9053-207e6f414ab1" (UID: "017d5a4f-99ba-4d3f-9053-207e6f414ab1"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.429135 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017d5a4f-99ba-4d3f-9053-207e6f414ab1-kube-api-access-pb2cj" (OuterVolumeSpecName: "kube-api-access-pb2cj") pod "017d5a4f-99ba-4d3f-9053-207e6f414ab1" (UID: "017d5a4f-99ba-4d3f-9053-207e6f414ab1"). InnerVolumeSpecName "kube-api-access-pb2cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.459848 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "017d5a4f-99ba-4d3f-9053-207e6f414ab1" (UID: "017d5a4f-99ba-4d3f-9053-207e6f414ab1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.529218 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.529265 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb2cj\" (UniqueName: \"kubernetes.io/projected/017d5a4f-99ba-4d3f-9053-207e6f414ab1-kube-api-access-pb2cj\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.529276 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017d5a4f-99ba-4d3f-9053-207e6f414ab1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.642752 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 14:47:59 crc kubenswrapper[4770]: W1209 14:47:59.647870 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode82b342d_8682_4892_836b_6248fcea0d3f.slice/crio-5b6486c4cadeab81e14b0b017037714d8435d3e1c614d427431986c34c592a61 WatchSource:0}: Error finding container 5b6486c4cadeab81e14b0b017037714d8435d3e1c614d427431986c34c592a61: Status 404 returned error can't find the container with id 5b6486c4cadeab81e14b0b017037714d8435d3e1c614d427431986c34c592a61 Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.717979 4770 generic.go:334] "Generic (PLEG): container finished" podID="fbfdafc8-508d-4eec-9496-7058a6d1d49b" containerID="d64c4f4bca7327a10af13f66da55bb6ae201727cf953df708f837e0c525fe72f" exitCode=0 Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.718118 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-9vv8k" event={"ID":"fbfdafc8-508d-4eec-9496-7058a6d1d49b","Type":"ContainerDied","Data":"d64c4f4bca7327a10af13f66da55bb6ae201727cf953df708f837e0c525fe72f"} Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.720709 4770 generic.go:334] "Generic (PLEG): container finished" podID="b7669f5b-7406-4ef5-833b-f69821551b08" containerID="c3f5c3aff4d7cd88503269c521e5f346c17559e69d1db9d9b3d4514c0aaf414b" exitCode=0 Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.720760 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dpmnb" 
event={"ID":"b7669f5b-7406-4ef5-833b-f69821551b08","Type":"ContainerDied","Data":"c3f5c3aff4d7cd88503269c521e5f346c17559e69d1db9d9b3d4514c0aaf414b"} Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.723826 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerStarted","Data":"936ff86e6b615d46286eb51e9f6b98e02a8db70f6d4a09402c6c2340043b1558"} Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.723951 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.726330 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e82b342d-8682-4892-836b-6248fcea0d3f","Type":"ContainerStarted","Data":"5b6486c4cadeab81e14b0b017037714d8435d3e1c614d427431986c34c592a61"} Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.729292 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-k7gz4" event={"ID":"017d5a4f-99ba-4d3f-9053-207e6f414ab1","Type":"ContainerDied","Data":"cf773d8cb26acd09a313f82ae248c4162e0bee5741784f49c79f4c7e00bd9cb6"} Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.729395 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf773d8cb26acd09a313f82ae248c4162e0bee5741784f49c79f4c7e00bd9cb6" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.729437 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-k7gz4" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.792075 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.789566466 podStartE2EDuration="8.792057859s" podCreationTimestamp="2025-12-09 14:47:51 +0000 UTC" firstStartedPulling="2025-12-09 14:47:52.821073778 +0000 UTC m=+1504.717275914" lastFinishedPulling="2025-12-09 14:47:57.823565171 +0000 UTC m=+1509.719767307" observedRunningTime="2025-12-09 14:47:59.787372705 +0000 UTC m=+1511.683574841" watchObservedRunningTime="2025-12-09 14:47:59.792057859 +0000 UTC m=+1511.688259995" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.866617 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.866668 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.913612 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d6894fb8f-gdsw2"] Dec 09 14:47:59 crc kubenswrapper[4770]: E1209 14:47:59.914246 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017d5a4f-99ba-4d3f-9053-207e6f414ab1" containerName="barbican-db-sync" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.914321 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="017d5a4f-99ba-4d3f-9053-207e6f414ab1" containerName="barbican-db-sync" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.914602 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="017d5a4f-99ba-4d3f-9053-207e6f414ab1" containerName="barbican-db-sync" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.915741 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.919164 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sbdj5" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.919500 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.921115 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.938747 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c36a704-9c2f-4761-80b3-45215f34c1f6-config-data-custom\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.939152 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c36a704-9c2f-4761-80b3-45215f34c1f6-combined-ca-bundle\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.939201 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2nzf\" (UniqueName: \"kubernetes.io/projected/0c36a704-9c2f-4761-80b3-45215f34c1f6-kube-api-access-r2nzf\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.939268 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c36a704-9c2f-4761-80b3-45215f34c1f6-logs\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.939309 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c36a704-9c2f-4761-80b3-45215f34c1f6-config-data\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.946569 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8466dc55b6-5ppcl"] Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.953955 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.968233 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.979323 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d6894fb8f-gdsw2"] Dec 09 14:47:59 crc kubenswrapper[4770]: I1209 14:47:59.998834 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8466dc55b6-5ppcl"] Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.040951 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c36a704-9c2f-4761-80b3-45215f34c1f6-combined-ca-bundle\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.041210 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2nzf\" (UniqueName: \"kubernetes.io/projected/0c36a704-9c2f-4761-80b3-45215f34c1f6-kube-api-access-r2nzf\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.041331 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c36a704-9c2f-4761-80b3-45215f34c1f6-logs\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.041469 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c36a704-9c2f-4761-80b3-45215f34c1f6-config-data\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.041623 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c36a704-9c2f-4761-80b3-45215f34c1f6-config-data-custom\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.048255 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c36a704-9c2f-4761-80b3-45215f34c1f6-config-data\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.048659 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c36a704-9c2f-4761-80b3-45215f34c1f6-logs\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.049918 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c36a704-9c2f-4761-80b3-45215f34c1f6-combined-ca-bundle\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.054377 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c36a704-9c2f-4761-80b3-45215f34c1f6-config-data-custom\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.067380 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.089959 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2nzf\" (UniqueName: \"kubernetes.io/projected/0c36a704-9c2f-4761-80b3-45215f34c1f6-kube-api-access-r2nzf\") pod \"barbican-worker-d6894fb8f-gdsw2\" (UID: \"0c36a704-9c2f-4761-80b3-45215f34c1f6\") " pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.144845 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2242028-0a76-456d-b92c-28ccda87972d-combined-ca-bundle\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.144946 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2242028-0a76-456d-b92c-28ccda87972d-logs\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.144981 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2242028-0a76-456d-b92c-28ccda87972d-config-data\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.145020 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj7xx\" (UniqueName: \"kubernetes.io/projected/a2242028-0a76-456d-b92c-28ccda87972d-kube-api-access-vj7xx\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.145050 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2242028-0a76-456d-b92c-28ccda87972d-config-data-custom\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.218854 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l8bkm"] Dec 09 
14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.219161 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" podUID="ed270752-0ddd-4eff-a26a-cc14184e2898" containerName="dnsmasq-dns" containerID="cri-o://fda2c4502b17d79707fdb59d6843569c367746d964e48b8b6a3ba486b99aedaf" gracePeriod=10 Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.230914 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.255044 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj7xx\" (UniqueName: \"kubernetes.io/projected/a2242028-0a76-456d-b92c-28ccda87972d-kube-api-access-vj7xx\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.255127 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2242028-0a76-456d-b92c-28ccda87972d-config-data-custom\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.255283 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2242028-0a76-456d-b92c-28ccda87972d-combined-ca-bundle\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.255376 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2242028-0a76-456d-b92c-28ccda87972d-logs\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.255440 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2242028-0a76-456d-b92c-28ccda87972d-config-data\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.259215 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2242028-0a76-456d-b92c-28ccda87972d-logs\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.261209 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-d6894fb8f-gdsw2" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.264609 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2242028-0a76-456d-b92c-28ccda87972d-config-data-custom\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.287502 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2242028-0a76-456d-b92c-28ccda87972d-config-data\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.287996 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2242028-0a76-456d-b92c-28ccda87972d-combined-ca-bundle\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.303376 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj7xx\" (UniqueName: \"kubernetes.io/projected/a2242028-0a76-456d-b92c-28ccda87972d-kube-api-access-vj7xx\") pod \"barbican-keystone-listener-8466dc55b6-5ppcl\" (UID: \"a2242028-0a76-456d-b92c-28ccda87972d\") " pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.317356 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.317774 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cpb4z"] Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.334513 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.371865 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cpb4z"] Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.472165 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.472264 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.472344 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-config\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.472448 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.472471 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.472501 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdlm5\" (UniqueName: \"kubernetes.io/projected/c72f6fb5-b180-4bf5-9b26-ac0608313b83-kube-api-access-fdlm5\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.518659 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.575009 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-config\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.575153 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: 
\"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.575181 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.575213 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdlm5\" (UniqueName: \"kubernetes.io/projected/c72f6fb5-b180-4bf5-9b26-ac0608313b83-kube-api-access-fdlm5\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.575264 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.575364 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.576589 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.576588 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.576836 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-config\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.577545 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.584933 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 
14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.590193 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d494cf4b-vwwg5"] Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.598270 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.605097 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.618489 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlm5\" (UniqueName: \"kubernetes.io/projected/c72f6fb5-b180-4bf5-9b26-ac0608313b83-kube-api-access-fdlm5\") pod \"dnsmasq-dns-848cf88cfc-cpb4z\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.678409 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-combined-ca-bundle\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.678517 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlq8d\" (UniqueName: \"kubernetes.io/projected/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-kube-api-access-tlq8d\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.678585 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-logs\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.678696 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data-custom\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.678739 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.684550 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d494cf4b-vwwg5"] Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.756101 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.797628 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data-custom\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.797690 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.797753 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-combined-ca-bundle\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.797839 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlq8d\" (UniqueName: \"kubernetes.io/projected/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-kube-api-access-tlq8d\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.797917 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-logs\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.800363 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-logs\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.806214 4770 generic.go:334] "Generic (PLEG): container finished" podID="ed270752-0ddd-4eff-a26a-cc14184e2898" containerID="fda2c4502b17d79707fdb59d6843569c367746d964e48b8b6a3ba486b99aedaf" exitCode=0 Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.806275 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.806370 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" event={"ID":"ed270752-0ddd-4eff-a26a-cc14184e2898","Type":"ContainerDied","Data":"fda2c4502b17d79707fdb59d6843569c367746d964e48b8b6a3ba486b99aedaf"} Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.806484 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-combined-ca-bundle\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.817389 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data-custom\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.817641 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.817760 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.869910 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlq8d\" (UniqueName: \"kubernetes.io/projected/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-kube-api-access-tlq8d\") pod \"barbican-api-7d494cf4b-vwwg5\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.923044 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.923185 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:00 crc kubenswrapper[4770]: I1209 14:48:00.974327 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.013305 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.060997 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.287952 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.411857 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-nb\") pod \"ed270752-0ddd-4eff-a26a-cc14184e2898\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.411965 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-sb\") pod \"ed270752-0ddd-4eff-a26a-cc14184e2898\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.411983 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-286rd\" (UniqueName: \"kubernetes.io/projected/ed270752-0ddd-4eff-a26a-cc14184e2898-kube-api-access-286rd\") pod \"ed270752-0ddd-4eff-a26a-cc14184e2898\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.412033 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-swift-storage-0\") pod \"ed270752-0ddd-4eff-a26a-cc14184e2898\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.412054 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-svc\") pod \"ed270752-0ddd-4eff-a26a-cc14184e2898\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.412122 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-config\") pod \"ed270752-0ddd-4eff-a26a-cc14184e2898\" (UID: \"ed270752-0ddd-4eff-a26a-cc14184e2898\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.436613 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed270752-0ddd-4eff-a26a-cc14184e2898-kube-api-access-286rd" (OuterVolumeSpecName: "kube-api-access-286rd") pod "ed270752-0ddd-4eff-a26a-cc14184e2898" (UID: "ed270752-0ddd-4eff-a26a-cc14184e2898"). InnerVolumeSpecName "kube-api-access-286rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.517370 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-286rd\" (UniqueName: \"kubernetes.io/projected/ed270752-0ddd-4eff-a26a-cc14184e2898-kube-api-access-286rd\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.526580 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-config" (OuterVolumeSpecName: "config") pod "ed270752-0ddd-4eff-a26a-cc14184e2898" (UID: "ed270752-0ddd-4eff-a26a-cc14184e2898"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.536948 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ed270752-0ddd-4eff-a26a-cc14184e2898" (UID: "ed270752-0ddd-4eff-a26a-cc14184e2898"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.540954 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed270752-0ddd-4eff-a26a-cc14184e2898" (UID: "ed270752-0ddd-4eff-a26a-cc14184e2898"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.544525 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed270752-0ddd-4eff-a26a-cc14184e2898" (UID: "ed270752-0ddd-4eff-a26a-cc14184e2898"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.548685 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed270752-0ddd-4eff-a26a-cc14184e2898" (UID: "ed270752-0ddd-4eff-a26a-cc14184e2898"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.619227 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.619272 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.619287 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.619298 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.619311 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed270752-0ddd-4eff-a26a-cc14184e2898-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.681423 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.721963 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8466dc55b6-5ppcl"] Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.741331 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.815788 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d6894fb8f-gdsw2"] Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.827828 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-scripts\") pod \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.827907 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg6jq\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-kube-api-access-kg6jq\") pod \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.827950 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-combined-ca-bundle\") pod \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.827987 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-config-data\") pod \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.828013 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-certs\") pod \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\" (UID: \"fbfdafc8-508d-4eec-9496-7058a6d1d49b\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.828220 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg8n8\" (UniqueName: \"kubernetes.io/projected/b7669f5b-7406-4ef5-833b-f69821551b08-kube-api-access-tg8n8\") pod \"b7669f5b-7406-4ef5-833b-f69821551b08\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.830107 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" event={"ID":"a2242028-0a76-456d-b92c-28ccda87972d","Type":"ContainerStarted","Data":"326f9a9ccadcdb8d518ce06090c2ce2e533a0ba8d3797e2228ab3ae0e55bfd0f"} Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.832345 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7669f5b-7406-4ef5-833b-f69821551b08-etc-machine-id\") pod \"b7669f5b-7406-4ef5-833b-f69821551b08\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.832878 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/b7669f5b-7406-4ef5-833b-f69821551b08-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b7669f5b-7406-4ef5-833b-f69821551b08" (UID: "b7669f5b-7406-4ef5-833b-f69821551b08"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.832919 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-db-sync-config-data\") pod \"b7669f5b-7406-4ef5-833b-f69821551b08\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.833037 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-combined-ca-bundle\") pod \"b7669f5b-7406-4ef5-833b-f69821551b08\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.846339 4770 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7669f5b-7406-4ef5-833b-f69821551b08-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: W1209 14:48:01.847204 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c36a704_9c2f_4761_80b3_45215f34c1f6.slice/crio-8084dfacfc55ae796ec4164a63be79df867e883ca876627db2c4faefad2056a3 WatchSource:0}: Error finding container 8084dfacfc55ae796ec4164a63be79df867e883ca876627db2c4faefad2056a3: Status 404 returned error can't find the container with id 8084dfacfc55ae796ec4164a63be79df867e883ca876627db2c4faefad2056a3 Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.850579 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dpmnb" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.850855 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dpmnb" event={"ID":"b7669f5b-7406-4ef5-833b-f69821551b08","Type":"ContainerDied","Data":"bb208201158a7402fc3bee9c1aa241b7aa41de924ed7a02b1800aa9eb449e229"} Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.850917 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb208201158a7402fc3bee9c1aa241b7aa41de924ed7a02b1800aa9eb449e229" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.853894 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-kube-api-access-kg6jq" (OuterVolumeSpecName: "kube-api-access-kg6jq") pod "fbfdafc8-508d-4eec-9496-7058a6d1d49b" (UID: "fbfdafc8-508d-4eec-9496-7058a6d1d49b"). InnerVolumeSpecName "kube-api-access-kg6jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.853966 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7669f5b-7406-4ef5-833b-f69821551b08-kube-api-access-tg8n8" (OuterVolumeSpecName: "kube-api-access-tg8n8") pod "b7669f5b-7406-4ef5-833b-f69821551b08" (UID: "b7669f5b-7406-4ef5-833b-f69821551b08"). InnerVolumeSpecName "kube-api-access-tg8n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.858443 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b7669f5b-7406-4ef5-833b-f69821551b08" (UID: "b7669f5b-7406-4ef5-833b-f69821551b08"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.860181 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" event={"ID":"ed270752-0ddd-4eff-a26a-cc14184e2898","Type":"ContainerDied","Data":"90aee21e6dc30c597b1c25dcab0f2c622c3963c2e56c7f476686ac6fb5e8ffd4"} Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.860237 4770 scope.go:117] "RemoveContainer" containerID="fda2c4502b17d79707fdb59d6843569c367746d964e48b8b6a3ba486b99aedaf" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.860376 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-l8bkm" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.866156 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-certs" (OuterVolumeSpecName: "certs") pod "fbfdafc8-508d-4eec-9496-7058a6d1d49b" (UID: "fbfdafc8-508d-4eec-9496-7058a6d1d49b"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.871329 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-scripts" (OuterVolumeSpecName: "scripts") pod "fbfdafc8-508d-4eec-9496-7058a6d1d49b" (UID: "fbfdafc8-508d-4eec-9496-7058a6d1d49b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.889235 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-config-data" (OuterVolumeSpecName: "config-data") pod "fbfdafc8-508d-4eec-9496-7058a6d1d49b" (UID: "fbfdafc8-508d-4eec-9496-7058a6d1d49b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.889376 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-9vv8k" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.890526 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-9vv8k" event={"ID":"fbfdafc8-508d-4eec-9496-7058a6d1d49b","Type":"ContainerDied","Data":"5b17c24b9554d45e8168b537e6b5949c8ef487fe02af25246ebca0ee6f6e0e44"} Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.890554 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b17c24b9554d45e8168b537e6b5949c8ef487fe02af25246ebca0ee6f6e0e44" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.890577 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.890960 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.918066 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l8bkm"] Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.935093 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbfdafc8-508d-4eec-9496-7058a6d1d49b" (UID: "fbfdafc8-508d-4eec-9496-7058a6d1d49b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.942575 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l8bkm"] Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.944300 4770 scope.go:117] "RemoveContainer" containerID="11019bf1655511b12dd59a97f6e65440e090e7d569bf875bb764f78fbdbb91d0" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.949425 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-config-data\") pod \"b7669f5b-7406-4ef5-833b-f69821551b08\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.949491 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-scripts\") pod \"b7669f5b-7406-4ef5-833b-f69821551b08\" (UID: \"b7669f5b-7406-4ef5-833b-f69821551b08\") " Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.959377 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.959434 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg6jq\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-kube-api-access-kg6jq\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.959464 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.959486 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fbfdafc8-508d-4eec-9496-7058a6d1d49b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.959505 4770 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fbfdafc8-508d-4eec-9496-7058a6d1d49b-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.959526 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg8n8\" (UniqueName: \"kubernetes.io/projected/b7669f5b-7406-4ef5-833b-f69821551b08-kube-api-access-tg8n8\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.959546 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.963471 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7669f5b-7406-4ef5-833b-f69821551b08" (UID: "b7669f5b-7406-4ef5-833b-f69821551b08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.974790 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-2k6db"] Dec 09 14:48:01 crc kubenswrapper[4770]: E1209 14:48:01.975218 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed270752-0ddd-4eff-a26a-cc14184e2898" containerName="dnsmasq-dns" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.975232 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed270752-0ddd-4eff-a26a-cc14184e2898" containerName="dnsmasq-dns" Dec 09 14:48:01 crc kubenswrapper[4770]: E1209 14:48:01.975250 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7669f5b-7406-4ef5-833b-f69821551b08" containerName="cinder-db-sync" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.975258 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7669f5b-7406-4ef5-833b-f69821551b08" containerName="cinder-db-sync" Dec 09 14:48:01 crc kubenswrapper[4770]: E1209 14:48:01.975285 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed270752-0ddd-4eff-a26a-cc14184e2898" containerName="init" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.975293 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed270752-0ddd-4eff-a26a-cc14184e2898" containerName="init" Dec 09 14:48:01 crc kubenswrapper[4770]: E1209 14:48:01.975316 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfdafc8-508d-4eec-9496-7058a6d1d49b" containerName="cloudkitty-db-sync" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.975323 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfdafc8-508d-4eec-9496-7058a6d1d49b" containerName="cloudkitty-db-sync" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.975572 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed270752-0ddd-4eff-a26a-cc14184e2898" containerName="dnsmasq-dns" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.975600 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7669f5b-7406-4ef5-833b-f69821551b08" containerName="cinder-db-sync" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.975610 4770 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfdafc8-508d-4eec-9496-7058a6d1d49b" containerName="cloudkitty-db-sync" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.976503 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.988060 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-2k6db"] Dec 09 14:48:01 crc kubenswrapper[4770]: I1209 14:48:01.988438 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-scripts" (OuterVolumeSpecName: "scripts") pod "b7669f5b-7406-4ef5-833b-f69821551b08" (UID: "b7669f5b-7406-4ef5-833b-f69821551b08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.064525 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.064559 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.089370 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-config-data" (OuterVolumeSpecName: "config-data") pod "b7669f5b-7406-4ef5-833b-f69821551b08" (UID: "b7669f5b-7406-4ef5-833b-f69821551b08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.133561 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.135600 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.140884 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.158594 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.166123 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-scripts\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.166327 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-certs\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.166417 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-config-data\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.166517 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9lp\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-kube-api-access-xv9lp\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.207750 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-combined-ca-bundle\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.208986 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7669f5b-7406-4ef5-833b-f69821551b08-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.220121 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cpb4z"] Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.281610 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-nnv99"] Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.287605 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.299842 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-nnv99"] Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.315935 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-svc\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316047 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpgr2\" (UniqueName: \"kubernetes.io/projected/805f003b-0500-42d6-9516-373ca8ec2c6a-kube-api-access-rpgr2\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316124 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316186 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dtzw\" (UniqueName: \"kubernetes.io/projected/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-kube-api-access-7dtzw\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316239 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-combined-ca-bundle\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316269 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316325 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316394 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316573 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-config\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316617 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316682 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-scripts\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316722 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-certs\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316769 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-config-data\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316805 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316823 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv9lp\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-kube-api-access-xv9lp\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316857 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.316885 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/805f003b-0500-42d6-9516-373ca8ec2c6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.322919 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cpb4z"] Dec 09 14:48:02 crc 
kubenswrapper[4770]: I1209 14:48:02.329358 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-config-data\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.331009 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-combined-ca-bundle\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.336239 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-scripts\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.336673 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-certs\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.353476 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d494cf4b-vwwg5"] Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.396394 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv9lp\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-kube-api-access-xv9lp\") pod \"cloudkitty-storageinit-2k6db\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429445 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429515 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429548 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/805f003b-0500-42d6-9516-373ca8ec2c6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429595 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-svc\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc 
kubenswrapper[4770]: I1209 14:48:02.429630 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpgr2\" (UniqueName: \"kubernetes.io/projected/805f003b-0500-42d6-9516-373ca8ec2c6a-kube-api-access-rpgr2\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429675 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429701 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dtzw\" (UniqueName: \"kubernetes.io/projected/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-kube-api-access-7dtzw\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429743 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429774 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429812 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429863 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-config\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.429896 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.440424 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/805f003b-0500-42d6-9516-373ca8ec2c6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.440897 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.441344 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.441744 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-svc\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.442515 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-config\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.449311 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.451584 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.452880 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.469660 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.470262 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.476303 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.477512 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.478581 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.483178 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpgr2\" (UniqueName: \"kubernetes.io/projected/805f003b-0500-42d6-9516-373ca8ec2c6a-kube-api-access-rpgr2\") pod \"cinder-scheduler-0\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.505852 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dtzw\" (UniqueName: \"kubernetes.io/projected/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-kube-api-access-7dtzw\") pod \"dnsmasq-dns-6578955fd5-nnv99\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") " pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.545649 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.578980 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-scripts\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.579086 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data-custom\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.579154 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.579436 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.579495 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-logs\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.579636 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfs6\" (UniqueName: \"kubernetes.io/projected/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-kube-api-access-qcfs6\") pod \"cinder-api-0\" (UID: 
\"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.580544 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.582564 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.635892 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.668137 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed270752-0ddd-4eff-a26a-cc14184e2898" path="/var/lib/kubelet/pods/ed270752-0ddd-4eff-a26a-cc14184e2898/volumes" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.674822 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.686888 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.686983 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-scripts\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.687019 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data-custom\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.687047 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.687279 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.687311 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-logs\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.687398 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcfs6\" (UniqueName: \"kubernetes.io/projected/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-kube-api-access-qcfs6\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.691693 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.698003 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.700214 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-logs\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.702687 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-scripts\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.724889 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcfs6\" (UniqueName: \"kubernetes.io/projected/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-kube-api-access-qcfs6\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.726204 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.761746 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data-custom\") pod \"cinder-api-0\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " pod="openstack/cinder-api-0" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.937289 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" event={"ID":"c72f6fb5-b180-4bf5-9b26-ac0608313b83","Type":"ContainerStarted","Data":"08578f55313287260bb35eafd74f8f0cd1a99f422e4ceba90a268ec894957abd"} Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.946813 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d494cf4b-vwwg5" event={"ID":"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9","Type":"ContainerStarted","Data":"28aef3b71a4a991157044dc1ebc1590d7f595842db0dd61ac5bd1cff4a1e8910"} Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.946862 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d494cf4b-vwwg5" event={"ID":"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9","Type":"ContainerStarted","Data":"7f7adf3c859ad161e6429329b10921ff10beacd9c014b3cd67ea7dae3d61a573"} Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.969574 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.969609 4770 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Dec 09 14:48:02 crc kubenswrapper[4770]: I1209 14:48:02.970613 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6894fb8f-gdsw2" event={"ID":"0c36a704-9c2f-4761-80b3-45215f34c1f6","Type":"ContainerStarted","Data":"8084dfacfc55ae796ec4164a63be79df867e883ca876627db2c4faefad2056a3"} Dec 09 14:48:03 crc kubenswrapper[4770]: I1209 14:48:03.016943 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 14:48:03 crc kubenswrapper[4770]: I1209 14:48:03.292097 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-2k6db"] Dec 09 14:48:03 crc kubenswrapper[4770]: I1209 14:48:03.651834 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-nnv99"] Dec 09 14:48:03 crc kubenswrapper[4770]: I1209 14:48:03.777136 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 14:48:03 crc kubenswrapper[4770]: W1209 14:48:03.891949 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod805f003b_0500_42d6_9516_373ca8ec2c6a.slice/crio-1b1b19f9bf778bb9126f33a07fd149d5508353c1741e72612c82e98fa6a7edf6 WatchSource:0}: Error finding container 1b1b19f9bf778bb9126f33a07fd149d5508353c1741e72612c82e98fa6a7edf6: Status 404 returned error can't find the container with id 1b1b19f9bf778bb9126f33a07fd149d5508353c1741e72612c82e98fa6a7edf6 Dec 09 14:48:03 crc kubenswrapper[4770]: I1209 14:48:03.960523 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 14:48:04 crc kubenswrapper[4770]: W1209 14:48:04.019413 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf019764a_3c5f_45da_91b9_4a2d7e48a6d7.slice/crio-bdfbd2b0591e328dc36cab8ba4edfe05983749741fb49471040d3c0e3b133507 WatchSource:0}: Error finding container bdfbd2b0591e328dc36cab8ba4edfe05983749741fb49471040d3c0e3b133507: Status 404 returned error can't find the container with id bdfbd2b0591e328dc36cab8ba4edfe05983749741fb49471040d3c0e3b133507 Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.025278 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"805f003b-0500-42d6-9516-373ca8ec2c6a","Type":"ContainerStarted","Data":"1b1b19f9bf778bb9126f33a07fd149d5508353c1741e72612c82e98fa6a7edf6"} Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.062305 4770 generic.go:334] "Generic (PLEG): container finished" podID="c72f6fb5-b180-4bf5-9b26-ac0608313b83" containerID="de403727e3b54547af79754845d1c8473b79d3b30fed2d7722d5891e033d1744" exitCode=0 Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.062564 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" event={"ID":"c72f6fb5-b180-4bf5-9b26-ac0608313b83","Type":"ContainerDied","Data":"de403727e3b54547af79754845d1c8473b79d3b30fed2d7722d5891e033d1744"} Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.117289 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" event={"ID":"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5","Type":"ContainerStarted","Data":"ebfb862bc3f1ce07056808d0c9e9e17a8137255fb3878670cb4ef5a298b77cd1"} Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.178215 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-7d494cf4b-vwwg5" event={"ID":"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9","Type":"ContainerStarted","Data":"828a4c59703a982ff503a156e8431849d0dfcfa0cbaed445dcc0c74758099231"} Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.178674 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.178751 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.200018 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.200051 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.203099 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-2k6db" event={"ID":"011902e7-c27f-4298-abd3-93eea4d5c579","Type":"ContainerStarted","Data":"e9c8c2372e3d88bd0ebf9adedc755ef966edd0b442bb925b096e3cd4bda2a85f"} Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.203140 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-2k6db" event={"ID":"011902e7-c27f-4298-abd3-93eea4d5c579","Type":"ContainerStarted","Data":"ef7ce2f68cd82ef8d71bc692b61a3b5c838fd1b416b8163b96fb51c3672151c8"} Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.214866 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7d494cf4b-vwwg5" podStartSLOduration=4.21484084 podStartE2EDuration="4.21484084s" podCreationTimestamp="2025-12-09 14:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:04.198706999 +0000 UTC m=+1516.094909135" watchObservedRunningTime="2025-12-09 14:48:04.21484084 +0000 UTC m=+1516.111042976" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.264850 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-2k6db" podStartSLOduration=3.255714152 podStartE2EDuration="3.255714152s" podCreationTimestamp="2025-12-09 14:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:04.227952251 +0000 UTC m=+1516.124154387" watchObservedRunningTime="2025-12-09 14:48:04.255714152 +0000 UTC m=+1516.151916288" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.764805 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.877199 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-sb\") pod \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.877384 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-svc\") pod \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.877420 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdlm5\" (UniqueName: \"kubernetes.io/projected/c72f6fb5-b180-4bf5-9b26-ac0608313b83-kube-api-access-fdlm5\") pod \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.877515 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-config\") pod \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.877555 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-nb\") pod \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.877673 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-swift-storage-0\") pod \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\" (UID: \"c72f6fb5-b180-4bf5-9b26-ac0608313b83\") " Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.922005 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c72f6fb5-b180-4bf5-9b26-ac0608313b83" (UID: "c72f6fb5-b180-4bf5-9b26-ac0608313b83"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.927961 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c72f6fb5-b180-4bf5-9b26-ac0608313b83-kube-api-access-fdlm5" (OuterVolumeSpecName: "kube-api-access-fdlm5") pod "c72f6fb5-b180-4bf5-9b26-ac0608313b83" (UID: "c72f6fb5-b180-4bf5-9b26-ac0608313b83"). InnerVolumeSpecName "kube-api-access-fdlm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.965802 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-config" (OuterVolumeSpecName: "config") pod "c72f6fb5-b180-4bf5-9b26-ac0608313b83" (UID: "c72f6fb5-b180-4bf5-9b26-ac0608313b83"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.980425 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdlm5\" (UniqueName: \"kubernetes.io/projected/c72f6fb5-b180-4bf5-9b26-ac0608313b83-kube-api-access-fdlm5\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.980485 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:04 crc kubenswrapper[4770]: I1209 14:48:04.980495 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.111684 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c72f6fb5-b180-4bf5-9b26-ac0608313b83" (UID: "c72f6fb5-b180-4bf5-9b26-ac0608313b83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.165372 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c72f6fb5-b180-4bf5-9b26-ac0608313b83" (UID: "c72f6fb5-b180-4bf5-9b26-ac0608313b83"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.193120 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.193150 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.211506 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c72f6fb5-b180-4bf5-9b26-ac0608313b83" (UID: "c72f6fb5-b180-4bf5-9b26-ac0608313b83"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.265164 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f019764a-3c5f-45da-91b9-4a2d7e48a6d7","Type":"ContainerStarted","Data":"bdfbd2b0591e328dc36cab8ba4edfe05983749741fb49471040d3c0e3b133507"} Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.278592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" event={"ID":"c72f6fb5-b180-4bf5-9b26-ac0608313b83","Type":"ContainerDied","Data":"08578f55313287260bb35eafd74f8f0cd1a99f422e4ceba90a268ec894957abd"} Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.278653 4770 scope.go:117] "RemoveContainer" containerID="de403727e3b54547af79754845d1c8473b79d3b30fed2d7722d5891e033d1744" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.278838 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-cpb4z" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.295230 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c72f6fb5-b180-4bf5-9b26-ac0608313b83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.311407 4770 generic.go:334] "Generic (PLEG): container finished" podID="6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" containerID="4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5" exitCode=0 Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.311933 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" event={"ID":"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5","Type":"ContainerDied","Data":"4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5"} Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.412523 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cpb4z"] Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.418309 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cpb4z"] Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.808852 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.809235 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:48:05 crc kubenswrapper[4770]: I1209 14:48:05.815796 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:06 crc kubenswrapper[4770]: I1209 14:48:06.151917 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 14:48:06 crc kubenswrapper[4770]: I1209 14:48:06.152025 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:48:06 crc kubenswrapper[4770]: I1209 14:48:06.284246 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 14:48:06 crc kubenswrapper[4770]: I1209 14:48:06.349416 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f019764a-3c5f-45da-91b9-4a2d7e48a6d7","Type":"ContainerStarted","Data":"4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4"} Dec 09 14:48:06 crc 
kubenswrapper[4770]: I1209 14:48:06.435928 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 14:48:06 crc kubenswrapper[4770]: I1209 14:48:06.619900 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c72f6fb5-b180-4bf5-9b26-ac0608313b83" path="/var/lib/kubelet/pods/c72f6fb5-b180-4bf5-9b26-ac0608313b83/volumes" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.930224 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dccbff898-2zrvn"] Dec 09 14:48:07 crc kubenswrapper[4770]: E1209 14:48:07.930983 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72f6fb5-b180-4bf5-9b26-ac0608313b83" containerName="init" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.930997 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72f6fb5-b180-4bf5-9b26-ac0608313b83" containerName="init" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.931190 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c72f6fb5-b180-4bf5-9b26-ac0608313b83" containerName="init" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.932317 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.943796 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.944069 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.956211 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dccbff898-2zrvn"] Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.982980 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-config-data-custom\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.983146 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-config-data\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.983212 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-internal-tls-certs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.983238 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-public-tls-certs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.983336 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-logs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.983399 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgbs\" (UniqueName: \"kubernetes.io/projected/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-kube-api-access-tkgbs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:07 crc kubenswrapper[4770]: I1209 14:48:07.983444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-combined-ca-bundle\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.085221 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-public-tls-certs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.085278 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-internal-tls-certs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.085341 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-logs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.085394 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkgbs\" (UniqueName: \"kubernetes.io/projected/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-kube-api-access-tkgbs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.085427 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-combined-ca-bundle\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.085517 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-config-data-custom\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.085604 
4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-config-data\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.090528 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-logs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.102626 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-public-tls-certs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.103508 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-combined-ca-bundle\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.104368 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-config-data\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.105010 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-internal-tls-certs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.116291 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-config-data-custom\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.116298 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkgbs\" (UniqueName: \"kubernetes.io/projected/1a50151f-6df2-4bd3-b8aa-edb8f5545b2c-kube-api-access-tkgbs\") pod \"barbican-api-7dccbff898-2zrvn\" (UID: \"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c\") " pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.393784 4770 generic.go:334] "Generic (PLEG): container finished" podID="011902e7-c27f-4298-abd3-93eea4d5c579" containerID="e9c8c2372e3d88bd0ebf9adedc755ef966edd0b442bb925b096e3cd4bda2a85f" exitCode=0 Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.393846 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-2k6db" 
event={"ID":"011902e7-c27f-4298-abd3-93eea4d5c579","Type":"ContainerDied","Data":"e9c8c2372e3d88bd0ebf9adedc755ef966edd0b442bb925b096e3cd4bda2a85f"} Dec 09 14:48:08 crc kubenswrapper[4770]: I1209 14:48:08.452838 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.043994 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dccbff898-2zrvn"] Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.476520 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f019764a-3c5f-45da-91b9-4a2d7e48a6d7","Type":"ContainerStarted","Data":"e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194"} Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.477185 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerName="cinder-api-log" containerID="cri-o://4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4" gracePeriod=30 Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.477481 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.477718 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerName="cinder-api" containerID="cri-o://e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194" gracePeriod=30 Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.537933 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.5379173139999995 podStartE2EDuration="7.537917314s" podCreationTimestamp="2025-12-09 14:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:09.522087232 +0000 UTC m=+1521.418289368" watchObservedRunningTime="2025-12-09 14:48:09.537917314 +0000 UTC m=+1521.434119450" Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.568000 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" event={"ID":"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5","Type":"ContainerStarted","Data":"7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509"} Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.569210 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.588364 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" event={"ID":"a2242028-0a76-456d-b92c-28ccda87972d","Type":"ContainerStarted","Data":"b6bdab9d27c6e16e9ac9040991b9aba397ab8b621cf72642439db1bb6e7ac141"} Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.618885 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6894fb8f-gdsw2" event={"ID":"0c36a704-9c2f-4761-80b3-45215f34c1f6","Type":"ContainerStarted","Data":"15cca131594cd61ff4c50658951a0af30a3e53171e849ea922fca5a5862a4d98"} Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.622213 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" 
podStartSLOduration=7.622199255 podStartE2EDuration="7.622199255s" podCreationTimestamp="2025-12-09 14:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:09.619162434 +0000 UTC m=+1521.515364570" watchObservedRunningTime="2025-12-09 14:48:09.622199255 +0000 UTC m=+1521.518401391" Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.663077 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dccbff898-2zrvn" event={"ID":"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c","Type":"ContainerStarted","Data":"50e0c3ee8dc19af6b54abe037ef83b43963e2339e964bb5946c89fb78ce7d520"} Dec 09 14:48:09 crc kubenswrapper[4770]: I1209 14:48:09.715022 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"805f003b-0500-42d6-9516-373ca8ec2c6a","Type":"ContainerStarted","Data":"38577eba0975ea27662768e990c96a787c65deee4f4697b0969a735ced08e206"} Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.398961 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.459499 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-combined-ca-bundle\") pod \"011902e7-c27f-4298-abd3-93eea4d5c579\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.459851 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-certs\") pod \"011902e7-c27f-4298-abd3-93eea4d5c579\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.459883 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-scripts\") pod \"011902e7-c27f-4298-abd3-93eea4d5c579\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.459948 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-config-data\") pod \"011902e7-c27f-4298-abd3-93eea4d5c579\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.460124 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv9lp\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-kube-api-access-xv9lp\") pod \"011902e7-c27f-4298-abd3-93eea4d5c579\" (UID: \"011902e7-c27f-4298-abd3-93eea4d5c579\") " Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.474885 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-scripts" (OuterVolumeSpecName: "scripts") pod "011902e7-c27f-4298-abd3-93eea4d5c579" (UID: "011902e7-c27f-4298-abd3-93eea4d5c579"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.474908 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-certs" (OuterVolumeSpecName: "certs") pod "011902e7-c27f-4298-abd3-93eea4d5c579" (UID: "011902e7-c27f-4298-abd3-93eea4d5c579"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.475017 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-kube-api-access-xv9lp" (OuterVolumeSpecName: "kube-api-access-xv9lp") pod "011902e7-c27f-4298-abd3-93eea4d5c579" (UID: "011902e7-c27f-4298-abd3-93eea4d5c579"). InnerVolumeSpecName "kube-api-access-xv9lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.543991 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-config-data" (OuterVolumeSpecName: "config-data") pod "011902e7-c27f-4298-abd3-93eea4d5c579" (UID: "011902e7-c27f-4298-abd3-93eea4d5c579"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.564249 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv9lp\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-kube-api-access-xv9lp\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.564285 4770 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/011902e7-c27f-4298-abd3-93eea4d5c579-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.564294 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.564306 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.708125 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "011902e7-c27f-4298-abd3-93eea4d5c579" (UID: "011902e7-c27f-4298-abd3-93eea4d5c579"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.783859 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 14:48:10 crc kubenswrapper[4770]: E1209 14:48:10.784336 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011902e7-c27f-4298-abd3-93eea4d5c579" containerName="cloudkitty-storageinit" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.784350 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="011902e7-c27f-4298-abd3-93eea4d5c579" containerName="cloudkitty-storageinit" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.784543 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="011902e7-c27f-4298-abd3-93eea4d5c579" containerName="cloudkitty-storageinit" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.785319 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.794102 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dccbff898-2zrvn" event={"ID":"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c","Type":"ContainerStarted","Data":"325490d1c86f7f687ca39ef543c2e9a0ced4bcf08b6e63fb07a8869235c510fc"} Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.794148 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dccbff898-2zrvn" event={"ID":"1a50151f-6df2-4bd3-b8aa-edb8f5545b2c","Type":"ContainerStarted","Data":"b5482917b3f27e71b02e0a6c4ed987630fc7797aff0ac387322e18e9ad3c0062"} Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.795116 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.795154 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.795699 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.817119 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011902e7-c27f-4298-abd3-93eea4d5c579-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.848922 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.876885 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-nnv99"] Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.891406 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"805f003b-0500-42d6-9516-373ca8ec2c6a","Type":"ContainerStarted","Data":"ee02a5bcfd0093fc864b110d003babe02fd582af6d5cb7f23bf85a1a56f9251b"} Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.899798 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.901527 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.904995 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.907936 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.921408 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66sj2"] Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.923275 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.927201 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.927302 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.927327 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.927440 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47r5s\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-kube-api-access-47r5s\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.927469 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-scripts\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.927985 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-certs\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.935888 4770 generic.go:334] "Generic (PLEG): container finished" podID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerID="4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4" exitCode=143 Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.936017 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"f019764a-3c5f-45da-91b9-4a2d7e48a6d7","Type":"ContainerDied","Data":"4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4"} Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.943505 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66sj2"] Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.966597 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" event={"ID":"a2242028-0a76-456d-b92c-28ccda87972d","Type":"ContainerStarted","Data":"fb5afa7f972be234a6d464bb7052d7e5f9beedc9e92d81a2af954f01cb73b6db"} Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.982596 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-2k6db" event={"ID":"011902e7-c27f-4298-abd3-93eea4d5c579","Type":"ContainerDied","Data":"ef7ce2f68cd82ef8d71bc692b61a3b5c838fd1b416b8163b96fb51c3672151c8"} Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.982633 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef7ce2f68cd82ef8d71bc692b61a3b5c838fd1b416b8163b96fb51c3672151c8" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.982707 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-2k6db" Dec 09 14:48:10 crc kubenswrapper[4770]: I1209 14:48:10.994228 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dccbff898-2zrvn" podStartSLOduration=3.988454671 podStartE2EDuration="3.988454671s" podCreationTimestamp="2025-12-09 14:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:10.890477164 +0000 UTC m=+1522.786679330" watchObservedRunningTime="2025-12-09 14:48:10.988454671 +0000 UTC m=+1522.884656817" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.005605 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6894fb8f-gdsw2" event={"ID":"0c36a704-9c2f-4761-80b3-45215f34c1f6","Type":"ContainerStarted","Data":"8e37e5089652df323e69ab9db106291f9d108645f0c6bd5f81407bb3e5d1d19b"} Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035270 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035314 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035356 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-svc\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035378 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035406 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47r5s\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-kube-api-access-47r5s\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f09320eb-4edc-44e4-bf8c-9e8681206587-logs\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035447 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-scripts\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035462 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-certs\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035482 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035506 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035541 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035566 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-scripts\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035613 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9m9t\" (UniqueName: 
\"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-kube-api-access-z9m9t\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035629 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-config\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035652 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znkng\" (UniqueName: \"kubernetes.io/projected/afac8544-3931-4a40-bcd4-73e30c638547-kube-api-access-znkng\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035700 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-certs\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035715 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.035755 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.048236 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.049938 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.054080 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-certs\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " 
pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.055232 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-scripts\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.057583 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.058400 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.001886925 podStartE2EDuration="9.058389399s" podCreationTimestamp="2025-12-09 14:48:02 +0000 UTC" firstStartedPulling="2025-12-09 14:48:03.900088195 +0000 UTC m=+1515.796290331" lastFinishedPulling="2025-12-09 14:48:04.956590669 +0000 UTC m=+1516.852792805" observedRunningTime="2025-12-09 14:48:10.929836106 +0000 UTC m=+1522.826038242" watchObservedRunningTime="2025-12-09 14:48:11.058389399 +0000 UTC m=+1522.954591535" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.071842 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47r5s\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-kube-api-access-47r5s\") pod \"cloudkitty-proc-0\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.108613 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8466dc55b6-5ppcl" podStartSLOduration=6.070223555 podStartE2EDuration="12.10859524s" podCreationTimestamp="2025-12-09 14:47:59 +0000 UTC" firstStartedPulling="2025-12-09 14:48:01.776553526 +0000 UTC m=+1513.672755662" lastFinishedPulling="2025-12-09 14:48:07.814925211 +0000 UTC m=+1519.711127347" observedRunningTime="2025-12-09 14:48:11.005046634 +0000 UTC m=+1522.901248770" watchObservedRunningTime="2025-12-09 14:48:11.10859524 +0000 UTC m=+1523.004797376" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.123682 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d6894fb8f-gdsw2" podStartSLOduration=6.178425604 podStartE2EDuration="12.123665482s" podCreationTimestamp="2025-12-09 14:47:59 +0000 UTC" firstStartedPulling="2025-12-09 14:48:01.864073943 +0000 UTC m=+1513.760276069" lastFinishedPulling="2025-12-09 14:48:07.809313811 +0000 UTC m=+1519.705515947" observedRunningTime="2025-12-09 14:48:11.035106067 +0000 UTC m=+1522.931308203" watchObservedRunningTime="2025-12-09 14:48:11.123665482 +0000 UTC m=+1523.019867618" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.136926 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9m9t\" (UniqueName: \"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-kube-api-access-z9m9t\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.136969 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-config\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.136991 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znkng\" (UniqueName: \"kubernetes.io/projected/afac8544-3931-4a40-bcd4-73e30c638547-kube-api-access-znkng\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137174 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-certs\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137193 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137279 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137320 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137390 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-svc\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137410 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137453 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f09320eb-4edc-44e4-bf8c-9e8681206587-logs\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137486 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " 
pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137515 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-scripts\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.137529 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.140408 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-config\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.140447 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.142266 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-svc\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.142828 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.145467 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.146322 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f09320eb-4edc-44e4-bf8c-9e8681206587-logs\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.149265 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-certs\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.149594 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.151355 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.151648 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-scripts\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.164268 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9m9t\" (UniqueName: \"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-kube-api-access-z9m9t\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.173940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znkng\" (UniqueName: \"kubernetes.io/projected/afac8544-3931-4a40-bcd4-73e30c638547-kube-api-access-znkng\") pod \"dnsmasq-dns-58bd69657f-66sj2\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.174756 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data\") pod \"cloudkitty-api-0\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.191481 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.248290 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.271077 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.713398 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.858348 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-combined-ca-bundle\") pod \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.858680 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-etc-machine-id\") pod \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.858805 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data-custom\") pod \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.858854 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-logs\") pod \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.858941 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data\") pod \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.858969 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcfs6\" (UniqueName: \"kubernetes.io/projected/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-kube-api-access-qcfs6\") pod \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.858964 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f019764a-3c5f-45da-91b9-4a2d7e48a6d7" (UID: "f019764a-3c5f-45da-91b9-4a2d7e48a6d7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.859069 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-scripts\") pod \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\" (UID: \"f019764a-3c5f-45da-91b9-4a2d7e48a6d7\") " Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.859507 4770 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.860185 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-logs" (OuterVolumeSpecName: "logs") pod "f019764a-3c5f-45da-91b9-4a2d7e48a6d7" (UID: "f019764a-3c5f-45da-91b9-4a2d7e48a6d7"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.873763 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f019764a-3c5f-45da-91b9-4a2d7e48a6d7" (UID: "f019764a-3c5f-45da-91b9-4a2d7e48a6d7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.880504 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-kube-api-access-qcfs6" (OuterVolumeSpecName: "kube-api-access-qcfs6") pod "f019764a-3c5f-45da-91b9-4a2d7e48a6d7" (UID: "f019764a-3c5f-45da-91b9-4a2d7e48a6d7"). InnerVolumeSpecName "kube-api-access-qcfs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.880586 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-scripts" (OuterVolumeSpecName: "scripts") pod "f019764a-3c5f-45da-91b9-4a2d7e48a6d7" (UID: "f019764a-3c5f-45da-91b9-4a2d7e48a6d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.901985 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f019764a-3c5f-45da-91b9-4a2d7e48a6d7" (UID: "f019764a-3c5f-45da-91b9-4a2d7e48a6d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.937015 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data" (OuterVolumeSpecName: "config-data") pod "f019764a-3c5f-45da-91b9-4a2d7e48a6d7" (UID: "f019764a-3c5f-45da-91b9-4a2d7e48a6d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.962713 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.962760 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.962770 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.962779 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.962787 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:11 crc kubenswrapper[4770]: I1209 14:48:11.962795 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcfs6\" (UniqueName: \"kubernetes.io/projected/f019764a-3c5f-45da-91b9-4a2d7e48a6d7-kube-api-access-qcfs6\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.044545 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.044826 4770 generic.go:334] "Generic (PLEG): container finished" podID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerID="e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194" exitCode=0 Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.045000 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f019764a-3c5f-45da-91b9-4a2d7e48a6d7","Type":"ContainerDied","Data":"e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194"} Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.045025 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f019764a-3c5f-45da-91b9-4a2d7e48a6d7","Type":"ContainerDied","Data":"bdfbd2b0591e328dc36cab8ba4edfe05983749741fb49471040d3c0e3b133507"} Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.045041 4770 scope.go:117] "RemoveContainer" containerID="e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.046443 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.048877 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" podUID="6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" containerName="dnsmasq-dns" containerID="cri-o://7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509" gracePeriod=10 Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.165973 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.227456 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.227841 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="ceilometer-central-agent" containerID="cri-o://6fe8597e6f65ed70f8767c2fcea27020bf43d3fc2022e12366552013500f81ba" gracePeriod=30 Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.229516 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="proxy-httpd" containerID="cri-o://936ff86e6b615d46286eb51e9f6b98e02a8db70f6d4a09402c6c2340043b1558" gracePeriod=30 Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.229635 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="sg-core" containerID="cri-o://77a66348c27c97c5e05499508540075e8b6200af2dbc9f699dc87d60de83fdcd" gracePeriod=30 Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.229682 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="ceilometer-notification-agent" containerID="cri-o://34fc267bfba99ef32416451aa3d522773a9d5dec27c46652d45340442c080ae1" gracePeriod=30 Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.240935 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.409947 4770 scope.go:117] "RemoveContainer" containerID="4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.431635 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66sj2"] Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.521535 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.534388 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.546197 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 14:48:12 crc kubenswrapper[4770]: E1209 14:48:12.546605 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerName="cinder-api-log" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.546619 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerName="cinder-api-log" Dec 09 14:48:12 crc kubenswrapper[4770]: E1209 14:48:12.546647 4770 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerName="cinder-api" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.546653 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerName="cinder-api" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.546858 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerName="cinder-api" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.546873 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" containerName="cinder-api-log" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.547986 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.550000 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.552138 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.555400 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.561751 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.571913 4770 scope.go:117] "RemoveContainer" containerID="e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194" Dec 09 14:48:12 crc kubenswrapper[4770]: E1209 14:48:12.573170 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194\": container with ID starting with e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194 not found: ID does not exist" containerID="e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.573242 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194"} err="failed to get container status \"e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194\": rpc error: code = NotFound desc = could not find container \"e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194\": container with ID starting with e08289af0ddede05353e4e4f214be8afb22ec6f9a53fbe8e57ef98a7b8026194 not found: ID does not exist" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.573327 4770 scope.go:117] "RemoveContainer" containerID="4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4" Dec 09 14:48:12 crc kubenswrapper[4770]: E1209 14:48:12.577058 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4\": container with ID starting with 4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4 not found: ID does not exist" containerID="4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.577286 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4"} err="failed to get container status \"4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4\": rpc error: code = NotFound desc = could not find container \"4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4\": container with ID starting with 4763098db5027573b340783341ac508beb5047b579707f48d43a9fe2a4346bd4 not found: ID does not exist" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.583993 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.601195 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f019764a-3c5f-45da-91b9-4a2d7e48a6d7" path="/var/lib/kubelet/pods/f019764a-3c5f-45da-91b9-4a2d7e48a6d7/volumes" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.619328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.619382 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-scripts\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.619491 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r22mh\" (UniqueName: \"kubernetes.io/projected/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-kube-api-access-r22mh\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.619526 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.619641 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-config-data\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.619671 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.619753 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc 
kubenswrapper[4770]: I1209 14:48:12.621131 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-logs\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.621211 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.723091 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.723286 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-scripts\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.723388 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r22mh\" (UniqueName: \"kubernetes.io/projected/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-kube-api-access-r22mh\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.723421 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.723472 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-config-data\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.723498 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.723550 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.723696 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-logs\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0" 
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.723775 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.731497 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.738540 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.741278 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.742022 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-logs\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.750471 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-scripts\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.751591 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-config-data\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.752004 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.756301 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.764292 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r22mh\" (UniqueName: \"kubernetes.io/projected/9fad8a60-e1ec-47ed-8aca-46b3aa3319d2-kube-api-access-r22mh\") pod \"cinder-api-0\" (UID: \"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2\") " pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.887231 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 09 14:48:12 crc kubenswrapper[4770]: I1209 14:48:12.922218 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-nnv99"
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.031487 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dtzw\" (UniqueName: \"kubernetes.io/projected/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-kube-api-access-7dtzw\") pod \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") "
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.031647 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-config\") pod \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") "
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.031687 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-svc\") pod \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") "
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.031768 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-swift-storage-0\") pod \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") "
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.031815 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-sb\") pod \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") "
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.031906 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-nb\") pod \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\" (UID: \"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5\") "
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.070330 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-kube-api-access-7dtzw" (OuterVolumeSpecName: "kube-api-access-7dtzw") pod "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" (UID: "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5"). InnerVolumeSpecName "kube-api-access-7dtzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.099076 4770 generic.go:334] "Generic (PLEG): container finished" podID="afac8544-3931-4a40-bcd4-73e30c638547" containerID="76076d51e1ea4da8ceea2338cbdbdb28d7e6f15a7ebc5b083b299965fbb54ce0" exitCode=0
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.099163 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" event={"ID":"afac8544-3931-4a40-bcd4-73e30c638547","Type":"ContainerDied","Data":"76076d51e1ea4da8ceea2338cbdbdb28d7e6f15a7ebc5b083b299965fbb54ce0"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.099188 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" event={"ID":"afac8544-3931-4a40-bcd4-73e30c638547","Type":"ContainerStarted","Data":"9754d163397458943ab570ecc8454cbf4ed8eaabeb2cb74a560e6e531e7bb118"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.109104 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" (UID: "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.129535 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f09320eb-4edc-44e4-bf8c-9e8681206587","Type":"ContainerStarted","Data":"4b801d5d6fd10b384c54b4946588204224979becd55a39a1f0edf6dce3ab594b"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.129576 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f09320eb-4edc-44e4-bf8c-9e8681206587","Type":"ContainerStarted","Data":"f5f4100f0b185827ea03a221945b40061cb3ed0e043680239ad1bee2371f3240"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.135283 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dtzw\" (UniqueName: \"kubernetes.io/projected/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-kube-api-access-7dtzw\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.135499 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.143365 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-config" (OuterVolumeSpecName: "config") pod "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" (UID: "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.150947 4770 generic.go:334] "Generic (PLEG): container finished" podID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerID="936ff86e6b615d46286eb51e9f6b98e02a8db70f6d4a09402c6c2340043b1558" exitCode=0
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.150976 4770 generic.go:334] "Generic (PLEG): container finished" podID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerID="77a66348c27c97c5e05499508540075e8b6200af2dbc9f699dc87d60de83fdcd" exitCode=2
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.150983 4770 generic.go:334] "Generic (PLEG): container finished" podID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerID="6fe8597e6f65ed70f8767c2fcea27020bf43d3fc2022e12366552013500f81ba" exitCode=0
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.151030 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerDied","Data":"936ff86e6b615d46286eb51e9f6b98e02a8db70f6d4a09402c6c2340043b1558"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.151066 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerDied","Data":"77a66348c27c97c5e05499508540075e8b6200af2dbc9f699dc87d60de83fdcd"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.151075 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerDied","Data":"6fe8597e6f65ed70f8767c2fcea27020bf43d3fc2022e12366552013500f81ba"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.183220 4770 generic.go:334] "Generic (PLEG): container finished" podID="6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" containerID="7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509" exitCode=0
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.183278 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" (UID: "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.183350 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-nnv99"
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.183367 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" event={"ID":"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5","Type":"ContainerDied","Data":"7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.184884 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-nnv99" event={"ID":"6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5","Type":"ContainerDied","Data":"ebfb862bc3f1ce07056808d0c9e9e17a8137255fb3878670cb4ef5a298b77cd1"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.184920 4770 scope.go:117] "RemoveContainer" containerID="7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509"
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.201448 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8608e197-8cbc-4c1a-ac37-648bdb076ebe","Type":"ContainerStarted","Data":"f8c732f9846b8f3efe5b944cc08f712c8afff58bec403eafd117e392c1ec4dff"}
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.209602 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" (UID: "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.237892 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" (UID: "6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.238450 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.238705 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-config\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.238748 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.341268 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.423651 4770 scope.go:117] "RemoveContainer" containerID="4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5"
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.500045 4770 scope.go:117] "RemoveContainer" containerID="7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509"
Dec 09 14:48:13 crc kubenswrapper[4770]: E1209 14:48:13.500481 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509\": container with ID starting with 7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509 not found: ID does not exist" containerID="7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509"
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.500516 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509"} err="failed to get container status \"7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509\": rpc error: code = NotFound desc = could not find container \"7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509\": container with ID starting with 7db0ffea658eab34f289fca1273dbdffeaee05326bfc5471d6dd77758a8b5509 not found: ID does not exist"
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.500536 4770 scope.go:117] "RemoveContainer" containerID="4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5"
Dec 09 14:48:13 crc kubenswrapper[4770]: E1209 14:48:13.504847 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5\": container with ID starting with 4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5 not found: ID does not exist" containerID="4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5"
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.504886 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5"} err="failed to get container status \"4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5\": rpc error: code = NotFound desc = could not find container \"4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5\": container with ID starting with 4bcbc6053a1fc0522779184e36d4fb64fb91f5d708b764b1e2fd891a1a19daa5 not found: ID does not exist"
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.562337 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-nnv99"]
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.631951 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-nnv99"]
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.759755 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 09 14:48:13 crc kubenswrapper[4770]: I1209 14:48:13.898061 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d494cf4b-vwwg5"
Dec 09 14:48:14 crc kubenswrapper[4770]: I1209 14:48:14.060513 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d494cf4b-vwwg5"
Dec 09 14:48:14 crc kubenswrapper[4770]: I1209 14:48:14.248348 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f09320eb-4edc-44e4-bf8c-9e8681206587","Type":"ContainerStarted","Data":"ec5562f009fc786679854be2469a7c6437d83c104824686e0cc7696c2ed0b6d8"}
Dec 09 14:48:14 crc kubenswrapper[4770]: I1209 14:48:14.249837 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0"
Dec 09 14:48:14 crc kubenswrapper[4770]: I1209 14:48:14.264273 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2","Type":"ContainerStarted","Data":"c93939ff2bdcff8851ff2986bf0be7858cf458fe1e341bc9e356ea3dc5cd31e8"}
Dec 09 14:48:14 crc kubenswrapper[4770]: I1209 14:48:14.269117 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=4.269102841 podStartE2EDuration="4.269102841s" podCreationTimestamp="2025-12-09 14:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:14.267143639 +0000 UTC m=+1526.163345775" watchObservedRunningTime="2025-12-09 14:48:14.269102841 +0000 UTC m=+1526.165304977"
Dec 09 14:48:14 crc kubenswrapper[4770]: I1209 14:48:14.302093 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" event={"ID":"afac8544-3931-4a40-bcd4-73e30c638547","Type":"ContainerStarted","Data":"da5206a530407a6f7401425c692eb3ca82a14ced0168e8e498b112b83fcfb601"}
Dec 09 14:48:14 crc kubenswrapper[4770]: I1209 14:48:14.326259 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" podStartSLOduration=4.326240198 podStartE2EDuration="4.326240198s" podCreationTimestamp="2025-12-09 14:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:14.322260671 +0000 UTC m=+1526.218462807" watchObservedRunningTime="2025-12-09 14:48:14.326240198 +0000 UTC m=+1526.222442334"
Dec 09 14:48:14 crc kubenswrapper[4770]: I1209 14:48:14.615319 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" path="/var/lib/kubelet/pods/6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5/volumes"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.221954 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.277479 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-dccdd6975-6g8sl"]
Dec 09 14:48:15 crc kubenswrapper[4770]: E1209 14:48:15.277892 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" containerName="dnsmasq-dns"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.277909 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" containerName="dnsmasq-dns"
Dec 09 14:48:15 crc kubenswrapper[4770]: E1209 14:48:15.277932 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" containerName="init"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.277938 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" containerName="init"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.278136 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4e981d-4228-4cfa-ad8d-a0d44d9a87b5" containerName="dnsmasq-dns"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.279349 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.281798 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.283653 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.284340 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.312392 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-dccdd6975-6g8sl"]
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.318587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2","Type":"ContainerStarted","Data":"9b3866e008f7c1e4fd888b37b17a8814a380fddf39c70d6c0fa101ba8e076c9c"}
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.318774 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bd69657f-66sj2"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.415843 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-internal-tls-certs\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.415968 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9e75e98-4fff-4755-9908-1e0d4ac982bb-etc-swift\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.416693 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-config-data\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.416839 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e75e98-4fff-4755-9908-1e0d4ac982bb-run-httpd\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.416924 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e75e98-4fff-4755-9908-1e0d4ac982bb-log-httpd\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.416988 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phsst\" (UniqueName: \"kubernetes.io/projected/e9e75e98-4fff-4755-9908-1e0d4ac982bb-kube-api-access-phsst\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.417034 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-combined-ca-bundle\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.417131 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-public-tls-certs\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.519813 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-config-data\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.519894 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e75e98-4fff-4755-9908-1e0d4ac982bb-run-httpd\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.519928 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e75e98-4fff-4755-9908-1e0d4ac982bb-log-httpd\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.520146 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phsst\" (UniqueName: \"kubernetes.io/projected/e9e75e98-4fff-4755-9908-1e0d4ac982bb-kube-api-access-phsst\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.520174 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-combined-ca-bundle\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.520220 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-public-tls-certs\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.520293 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-internal-tls-certs\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.520326 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9e75e98-4fff-4755-9908-1e0d4ac982bb-etc-swift\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.521186 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e75e98-4fff-4755-9908-1e0d4ac982bb-run-httpd\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.521486 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e75e98-4fff-4755-9908-1e0d4ac982bb-log-httpd\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.530447 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-combined-ca-bundle\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.532435 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-public-tls-certs\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.540543 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-internal-tls-certs\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.543814 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e75e98-4fff-4755-9908-1e0d4ac982bb-config-data\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.547402 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phsst\" (UniqueName: \"kubernetes.io/projected/e9e75e98-4fff-4755-9908-1e0d4ac982bb-kube-api-access-phsst\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.562158 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9e75e98-4fff-4755-9908-1e0d4ac982bb-etc-swift\") pod \"swift-proxy-dccdd6975-6g8sl\" (UID: \"e9e75e98-4fff-4755-9908-1e0d4ac982bb\") " pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:15 crc kubenswrapper[4770]: I1209 14:48:15.604618 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:16 crc kubenswrapper[4770]: I1209 14:48:16.344755 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerName="cloudkitty-api-log" containerID="cri-o://4b801d5d6fd10b384c54b4946588204224979becd55a39a1f0edf6dce3ab594b" gracePeriod=30
Dec 09 14:48:16 crc kubenswrapper[4770]: I1209 14:48:16.344939 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerName="cloudkitty-api" containerID="cri-o://ec5562f009fc786679854be2469a7c6437d83c104824686e0cc7696c2ed0b6d8" gracePeriod=30
Dec 09 14:48:16 crc kubenswrapper[4770]: E1209 14:48:16.574909 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb6e2c6c_8e5b_4cb5_bec7_4000481b2161.slice/crio-34fc267bfba99ef32416451aa3d522773a9d5dec27c46652d45340442c080ae1.scope\": RecentStats: unable to find data in memory cache]"
Dec 09 14:48:16 crc kubenswrapper[4770]: I1209 14:48:16.726151 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-dccdd6975-6g8sl"]
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.365184 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dccdd6975-6g8sl" event={"ID":"e9e75e98-4fff-4755-9908-1e0d4ac982bb","Type":"ContainerStarted","Data":"d9526b02f7bff50ccf6f95b6617916fadd2097f12bb8ecce9c4c824e64ddc847"}
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.365706 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dccdd6975-6g8sl" event={"ID":"e9e75e98-4fff-4755-9908-1e0d4ac982bb","Type":"ContainerStarted","Data":"1a03dc509392a582790706b4feadd392aaa50198b00937cc19daf0b4e9a3346e"}
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.382974 4770 generic.go:334] "Generic (PLEG): container finished" podID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerID="34fc267bfba99ef32416451aa3d522773a9d5dec27c46652d45340442c080ae1" exitCode=0
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.383031 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerDied","Data":"34fc267bfba99ef32416451aa3d522773a9d5dec27c46652d45340442c080ae1"}
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.385276 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fad8a60-e1ec-47ed-8aca-46b3aa3319d2","Type":"ContainerStarted","Data":"60c4a90a9caf69c0944a37aaa93596c259164dc261dd154c2ac1f116125b0439"}
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.386452 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.388492 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8608e197-8cbc-4c1a-ac37-648bdb076ebe","Type":"ContainerStarted","Data":"e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680"}
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.391085 4770 generic.go:334] "Generic (PLEG): container finished" podID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerID="ec5562f009fc786679854be2469a7c6437d83c104824686e0cc7696c2ed0b6d8" exitCode=0
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.391102 4770 generic.go:334] "Generic (PLEG): container finished" podID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerID="4b801d5d6fd10b384c54b4946588204224979becd55a39a1f0edf6dce3ab594b" exitCode=143
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.391119 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f09320eb-4edc-44e4-bf8c-9e8681206587","Type":"ContainerDied","Data":"ec5562f009fc786679854be2469a7c6437d83c104824686e0cc7696c2ed0b6d8"}
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.391136 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f09320eb-4edc-44e4-bf8c-9e8681206587","Type":"ContainerDied","Data":"4b801d5d6fd10b384c54b4946588204224979becd55a39a1f0edf6dce3ab594b"}
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.414024 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.413993337 podStartE2EDuration="5.413993337s" podCreationTimestamp="2025-12-09 14:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:17.413124523 +0000 UTC m=+1529.309326659" watchObservedRunningTime="2025-12-09 14:48:17.413993337 +0000 UTC m=+1529.310195473"
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.552866 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=3.646177835 podStartE2EDuration="7.552840324s" podCreationTimestamp="2025-12-09 14:48:10 +0000 UTC" firstStartedPulling="2025-12-09 14:48:12.065259637 +0000 UTC m=+1523.961461773" lastFinishedPulling="2025-12-09 14:48:15.971922126 +0000 UTC m=+1527.868124262" observedRunningTime="2025-12-09 14:48:17.437672868 +0000 UTC m=+1529.333875004" watchObservedRunningTime="2025-12-09 14:48:17.552840324 +0000 UTC m=+1529.449042460"
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.591693 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"]
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.711235 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.723421 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.885060 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894285 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f09320eb-4edc-44e4-bf8c-9e8681206587-logs\") pod \"f09320eb-4edc-44e4-bf8c-9e8681206587\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894363 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9m9t\" (UniqueName: \"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-kube-api-access-z9m9t\") pod \"f09320eb-4edc-44e4-bf8c-9e8681206587\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894445 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-config-data\") pod \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894487 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-scripts\") pod \"f09320eb-4edc-44e4-bf8c-9e8681206587\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894530 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-combined-ca-bundle\") pod \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894560 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data-custom\") pod \"f09320eb-4edc-44e4-bf8c-9e8681206587\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894597 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-sg-core-conf-yaml\") pod \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894621 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-run-httpd\") pod \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894645 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-scripts\") pod \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894668 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-log-httpd\") pod \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894776 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-certs\") pod \"f09320eb-4edc-44e4-bf8c-9e8681206587\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894833 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz2nl\" (UniqueName: \"kubernetes.io/projected/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-kube-api-access-qz2nl\") pod \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\" (UID: \"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894869 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data\") pod \"f09320eb-4edc-44e4-bf8c-9e8681206587\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894899 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-combined-ca-bundle\") pod \"f09320eb-4edc-44e4-bf8c-9e8681206587\" (UID: \"f09320eb-4edc-44e4-bf8c-9e8681206587\") "
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.894924 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09320eb-4edc-44e4-bf8c-9e8681206587-logs" (OuterVolumeSpecName: "logs") pod "f09320eb-4edc-44e4-bf8c-9e8681206587" (UID: "f09320eb-4edc-44e4-bf8c-9e8681206587"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.895415 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f09320eb-4edc-44e4-bf8c-9e8681206587-logs\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.897199 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" (UID: "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.897427 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" (UID: "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.909260 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-certs" (OuterVolumeSpecName: "certs") pod "f09320eb-4edc-44e4-bf8c-9e8681206587" (UID: "f09320eb-4edc-44e4-bf8c-9e8681206587"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.912970 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f09320eb-4edc-44e4-bf8c-9e8681206587" (UID: "f09320eb-4edc-44e4-bf8c-9e8681206587"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.916119 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-scripts" (OuterVolumeSpecName: "scripts") pod "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" (UID: "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.917807 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-kube-api-access-qz2nl" (OuterVolumeSpecName: "kube-api-access-qz2nl") pod "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" (UID: "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161"). InnerVolumeSpecName "kube-api-access-qz2nl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.929915 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-kube-api-access-z9m9t" (OuterVolumeSpecName: "kube-api-access-z9m9t") pod "f09320eb-4edc-44e4-bf8c-9e8681206587" (UID: "f09320eb-4edc-44e4-bf8c-9e8681206587"). InnerVolumeSpecName "kube-api-access-z9m9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.930024 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-scripts" (OuterVolumeSpecName: "scripts") pod "f09320eb-4edc-44e4-bf8c-9e8681206587" (UID: "f09320eb-4edc-44e4-bf8c-9e8681206587"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.942338 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.974800 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data" (OuterVolumeSpecName: "config-data") pod "f09320eb-4edc-44e4-bf8c-9e8681206587" (UID: "f09320eb-4edc-44e4-bf8c-9e8681206587"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.997815 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.997854 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.997867 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.997880 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.997892 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.997902 4770 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-certs\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.997913 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz2nl\" (UniqueName: \"kubernetes.io/projected/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-kube-api-access-qz2nl\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.997925 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:17 crc kubenswrapper[4770]: I1209 14:48:17.997937 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9m9t\" (UniqueName: \"kubernetes.io/projected/f09320eb-4edc-44e4-bf8c-9e8681206587-kube-api-access-z9m9t\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.028910 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" (UID: "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.029375 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f09320eb-4edc-44e4-bf8c-9e8681206587" (UID: "f09320eb-4edc-44e4-bf8c-9e8681206587"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.092429 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" (UID: "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.102196 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09320eb-4edc-44e4-bf8c-9e8681206587-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.102228 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.102238 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.125873 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-config-data" (OuterVolumeSpecName: "config-data") pod "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" (UID: "bb6e2c6c-8e5b-4cb5-bec7-4000481b2161"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.204689 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.411911 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb6e2c6c-8e5b-4cb5-bec7-4000481b2161","Type":"ContainerDied","Data":"b400cce9cb794cb4f7b48b6c0efac5cce823ef848c31edf7e33f92602a0d148c"}
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.411946 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.411978 4770 scope.go:117] "RemoveContainer" containerID="936ff86e6b615d46286eb51e9f6b98e02a8db70f6d4a09402c6c2340043b1558"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.417156 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dccdd6975-6g8sl" event={"ID":"e9e75e98-4fff-4755-9908-1e0d4ac982bb","Type":"ContainerStarted","Data":"59b407dcfd82291c7a8e82af7c95bea185c7b4c574f9f0b372917092d62df9e2"}
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.417695 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.419240 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-dccdd6975-6g8sl"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.425530 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerName="cinder-scheduler" containerID="cri-o://38577eba0975ea27662768e990c96a787c65deee4f4697b0969a735ced08e206" gracePeriod=30
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.425644 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f09320eb-4edc-44e4-bf8c-9e8681206587","Type":"ContainerDied","Data":"f5f4100f0b185827ea03a221945b40061cb3ed0e043680239ad1bee2371f3240"}
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.425751 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerName="probe" containerID="cri-o://ee02a5bcfd0093fc864b110d003babe02fd582af6d5cb7f23bf85a1a56f9251b" gracePeriod=30
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.425864 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.439267 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-dccdd6975-6g8sl" podStartSLOduration=3.439248596 podStartE2EDuration="3.439248596s" podCreationTimestamp="2025-12-09 14:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:18.435959798 +0000 UTC m=+1530.332161944" watchObservedRunningTime="2025-12-09 14:48:18.439248596 +0000 UTC m=+1530.335450732"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.535890 4770 scope.go:117] "RemoveContainer" containerID="77a66348c27c97c5e05499508540075e8b6200af2dbc9f699dc87d60de83fdcd"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.536072 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.564049 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.581837 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.603051 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" path="/var/lib/kubelet/pods/bb6e2c6c-8e5b-4cb5-bec7-4000481b2161/volumes"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.604021 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 09 14:48:18 crc kubenswrapper[4770]: E1209 14:48:18.604420 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="sg-core"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.604445 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="sg-core"
Dec 09 14:48:18 crc kubenswrapper[4770]: E1209 14:48:18.604465 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="ceilometer-notification-agent"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.604474 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="ceilometer-notification-agent"
Dec 09 14:48:18 crc kubenswrapper[4770]: E1209 14:48:18.604499 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="proxy-httpd"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.604506 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="proxy-httpd"
Dec 09 14:48:18 crc kubenswrapper[4770]: E1209 14:48:18.604527 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="ceilometer-central-agent"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.604536 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="ceilometer-central-agent"
Dec 09 14:48:18 crc kubenswrapper[4770]: E1209 14:48:18.604554 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerName="cloudkitty-api"
Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.604563 4770 state_mem.go:107]
"Deleted CPUSet assignment" podUID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerName="cloudkitty-api" Dec 09 14:48:18 crc kubenswrapper[4770]: E1209 14:48:18.604581 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerName="cloudkitty-api-log" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.604588 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerName="cloudkitty-api-log" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.605277 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="ceilometer-notification-agent" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.605312 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="sg-core" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.605330 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="ceilometer-central-agent" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.605348 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerName="cloudkitty-api-log" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.605363 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09320eb-4edc-44e4-bf8c-9e8681206587" containerName="cloudkitty-api" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.605381 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6e2c6c-8e5b-4cb5-bec7-4000481b2161" containerName="proxy-httpd" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.608142 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.612148 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.612361 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.615320 4770 scope.go:117] "RemoveContainer" containerID="34fc267bfba99ef32416451aa3d522773a9d5dec27c46652d45340442c080ae1" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.632555 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.656023 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.666946 4770 scope.go:117] "RemoveContainer" containerID="6fe8597e6f65ed70f8767c2fcea27020bf43d3fc2022e12366552013500f81ba" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.667130 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.669610 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.675086 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.675427 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.675697 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.681111 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.730213 4770 scope.go:117] "RemoveContainer" containerID="ec5562f009fc786679854be2469a7c6437d83c104824686e0cc7696c2ed0b6d8" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.730245 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.730296 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmq7w\" (UniqueName: \"kubernetes.io/projected/a6644d11-e035-4b8f-919f-698a54dd3983-kube-api-access-mmq7w\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.730327 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-scripts\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.730434 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-log-httpd\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.730555 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-run-httpd\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.730721 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-config-data\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.731849 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: 
I1209 14:48:18.777620 4770 scope.go:117] "RemoveContainer" containerID="4b801d5d6fd10b384c54b4946588204224979becd55a39a1f0edf6dce3ab594b" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.833760 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.833833 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-scripts\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.833863 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-config-data\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.833915 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.833957 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmq7w\" (UniqueName: \"kubernetes.io/projected/a6644d11-e035-4b8f-919f-698a54dd3983-kube-api-access-mmq7w\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.833990 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-scripts\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834023 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwp9\" (UniqueName: \"kubernetes.io/projected/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-kube-api-access-djwp9\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834056 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-log-httpd\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834102 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-run-httpd\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834149 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834190 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-config-data\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834257 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834288 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-logs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834341 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-certs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834371 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.834414 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.835761 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-log-httpd\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.837017 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-run-httpd\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.840355 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-scripts\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc 
kubenswrapper[4770]: I1209 14:48:18.844930 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-config-data\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.845238 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.850332 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.857551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmq7w\" (UniqueName: \"kubernetes.io/projected/a6644d11-e035-4b8f-919f-698a54dd3983-kube-api-access-mmq7w\") pod \"ceilometer-0\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.936716 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwp9\" (UniqueName: \"kubernetes.io/projected/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-kube-api-access-djwp9\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.936829 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.936911 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-logs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.936948 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-certs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.936966 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.936992 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " 
pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.937010 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.937040 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-scripts\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.937059 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-config-data\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.937772 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-logs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.940591 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.942229 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.942947 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-config-data\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.943439 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-certs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.944182 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.951419 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " 
pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.958658 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.963288 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-scripts\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.970442 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwp9\" (UniqueName: \"kubernetes.io/projected/d2b8d36f-0bd8-4ac8-b673-15f5728d0a78-kube-api-access-djwp9\") pod \"cloudkitty-api-0\" (UID: \"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78\") " pod="openstack/cloudkitty-api-0" Dec 09 14:48:18 crc kubenswrapper[4770]: I1209 14:48:18.990996 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 14:48:19 crc kubenswrapper[4770]: I1209 14:48:19.443173 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="8608e197-8cbc-4c1a-ac37-648bdb076ebe" containerName="cloudkitty-proc" containerID="cri-o://e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680" gracePeriod=30 Dec 09 14:48:19 crc kubenswrapper[4770]: I1209 14:48:19.503245 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:19 crc kubenswrapper[4770]: W1209 14:48:19.514412 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6644d11_e035_4b8f_919f_698a54dd3983.slice/crio-b18e29adc31593903aed53142c97a2f5e771dc62508d183051b7b738cd9c9e1d WatchSource:0}: Error finding container b18e29adc31593903aed53142c97a2f5e771dc62508d183051b7b738cd9c9e1d: Status 404 returned error can't find the container with id b18e29adc31593903aed53142c97a2f5e771dc62508d183051b7b738cd9c9e1d Dec 09 14:48:19 crc kubenswrapper[4770]: I1209 14:48:19.649498 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 14:48:19 crc kubenswrapper[4770]: W1209 14:48:19.665018 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2b8d36f_0bd8_4ac8_b673_15f5728d0a78.slice/crio-4146d0a844eca62b111e9d89ed41d67428d8fe8fc34321092ed7ff17be2dcceb WatchSource:0}: Error finding container 4146d0a844eca62b111e9d89ed41d67428d8fe8fc34321092ed7ff17be2dcceb: Status 404 returned error can't find the container with id 4146d0a844eca62b111e9d89ed41d67428d8fe8fc34321092ed7ff17be2dcceb Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.473309 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78","Type":"ContainerStarted","Data":"23a3895fc3c97f0ca619e2050446f827379b805e57201f9117901e4d48fca312"} Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.473582 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78","Type":"ContainerStarted","Data":"4146d0a844eca62b111e9d89ed41d67428d8fe8fc34321092ed7ff17be2dcceb"} Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.511032 4770 generic.go:334] "Generic (PLEG): container finished" 
podID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerID="ee02a5bcfd0093fc864b110d003babe02fd582af6d5cb7f23bf85a1a56f9251b" exitCode=0 Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.511125 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"805f003b-0500-42d6-9516-373ca8ec2c6a","Type":"ContainerDied","Data":"ee02a5bcfd0093fc864b110d003babe02fd582af6d5cb7f23bf85a1a56f9251b"} Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.526169 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerStarted","Data":"b18e29adc31593903aed53142c97a2f5e771dc62508d183051b7b738cd9c9e1d"} Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.605059 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09320eb-4edc-44e4-bf8c-9e8681206587" path="/var/lib/kubelet/pods/f09320eb-4edc-44e4-bf8c-9e8681206587/volumes" Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.630329 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.824264 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dccbff898-2zrvn" Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.876984 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d494cf4b-vwwg5"] Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.877231 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d494cf4b-vwwg5" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api-log" containerID="cri-o://28aef3b71a4a991157044dc1ebc1590d7f595842db0dd61ac5bd1cff4a1e8910" gracePeriod=30 Dec 09 14:48:20 crc kubenswrapper[4770]: I1209 14:48:20.877379 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d494cf4b-vwwg5" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api" containerID="cri-o://828a4c59703a982ff503a156e8431849d0dfcfa0cbaed445dcc0c74758099231" gracePeriod=30 Dec 09 14:48:21 crc kubenswrapper[4770]: I1209 14:48:21.272878 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:48:21 crc kubenswrapper[4770]: I1209 14:48:21.362509 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9mw5m"] Dec 09 14:48:21 crc kubenswrapper[4770]: I1209 14:48:21.362748 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" podUID="17af40df-932c-4a21-8463-5d60c1335003" containerName="dnsmasq-dns" containerID="cri-o://eb82728c68a6ad911d750f0b3dbba376cc511c9d7e41717998f951aadf851a33" gracePeriod=10 Dec 09 14:48:21 crc kubenswrapper[4770]: I1209 14:48:21.553574 4770 generic.go:334] "Generic (PLEG): container finished" podID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerID="28aef3b71a4a991157044dc1ebc1590d7f595842db0dd61ac5bd1cff4a1e8910" exitCode=143 Dec 09 14:48:21 crc kubenswrapper[4770]: I1209 14:48:21.553896 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d494cf4b-vwwg5" event={"ID":"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9","Type":"ContainerDied","Data":"28aef3b71a4a991157044dc1ebc1590d7f595842db0dd61ac5bd1cff4a1e8910"} Dec 09 14:48:21 crc kubenswrapper[4770]: 
I1209 14:48:21.567335 4770 generic.go:334] "Generic (PLEG): container finished" podID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerID="38577eba0975ea27662768e990c96a787c65deee4f4697b0969a735ced08e206" exitCode=0 Dec 09 14:48:21 crc kubenswrapper[4770]: I1209 14:48:21.567402 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"805f003b-0500-42d6-9516-373ca8ec2c6a","Type":"ContainerDied","Data":"38577eba0975ea27662768e990c96a787c65deee4f4697b0969a735ced08e206"} Dec 09 14:48:22 crc kubenswrapper[4770]: I1209 14:48:22.581060 4770 generic.go:334] "Generic (PLEG): container finished" podID="17af40df-932c-4a21-8463-5d60c1335003" containerID="eb82728c68a6ad911d750f0b3dbba376cc511c9d7e41717998f951aadf851a33" exitCode=0 Dec 09 14:48:22 crc kubenswrapper[4770]: I1209 14:48:22.582347 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" event={"ID":"17af40df-932c-4a21-8463-5d60c1335003","Type":"ContainerDied","Data":"eb82728c68a6ad911d750f0b3dbba376cc511c9d7e41717998f951aadf851a33"} Dec 09 14:48:23 crc kubenswrapper[4770]: I1209 14:48:23.197207 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:48:24 crc kubenswrapper[4770]: I1209 14:48:24.325117 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7d494cf4b-vwwg5" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:38158->10.217.0.179:9311: read: connection reset by peer" Dec 09 14:48:24 crc kubenswrapper[4770]: I1209 14:48:24.325125 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7d494cf4b-vwwg5" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:38144->10.217.0.179:9311: read: connection reset by peer" Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.451951 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6fcb9dd595-tqt29" Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.519290 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cb5d9f9bd-qssdr"] Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.519503 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cb5d9f9bd-qssdr" podUID="ba146650-9074-423d-aa8f-9cded3a49030" containerName="neutron-api" containerID="cri-o://329e6d43c176fdcd2138c8fc42c34bbe2d96bae5abcfab163ed371ccc8f5d9c4" gracePeriod=30 Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.519932 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cb5d9f9bd-qssdr" podUID="ba146650-9074-423d-aa8f-9cded3a49030" containerName="neutron-httpd" containerID="cri-o://5d8674029ae744bd48620e8d7354335e652bcddaf85cf30b436b0516857a1007" gracePeriod=30 Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.616089 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-dccdd6975-6g8sl" Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.620402 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-dccdd6975-6g8sl" Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.687491 4770 generic.go:334] 
"Generic (PLEG): container finished" podID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerID="828a4c59703a982ff503a156e8431849d0dfcfa0cbaed445dcc0c74758099231" exitCode=0 Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.688166 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d494cf4b-vwwg5" event={"ID":"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9","Type":"ContainerDied","Data":"828a4c59703a982ff503a156e8431849d0dfcfa0cbaed445dcc0c74758099231"} Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.975691 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7d494cf4b-vwwg5" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": dial tcp 10.217.0.179:9311: connect: connection refused" Dec 09 14:48:25 crc kubenswrapper[4770]: I1209 14:48:25.976051 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7d494cf4b-vwwg5" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": dial tcp 10.217.0.179:9311: connect: connection refused" Dec 09 14:48:26 crc kubenswrapper[4770]: I1209 14:48:26.035203 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" podUID="17af40df-932c-4a21-8463-5d60c1335003" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: connect: connection refused" Dec 09 14:48:26 crc kubenswrapper[4770]: I1209 14:48:26.714677 4770 generic.go:334] "Generic (PLEG): container finished" podID="ba146650-9074-423d-aa8f-9cded3a49030" containerID="5d8674029ae744bd48620e8d7354335e652bcddaf85cf30b436b0516857a1007" exitCode=0 Dec 09 14:48:26 crc kubenswrapper[4770]: I1209 14:48:26.714800 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb5d9f9bd-qssdr" event={"ID":"ba146650-9074-423d-aa8f-9cded3a49030","Type":"ContainerDied","Data":"5d8674029ae744bd48620e8d7354335e652bcddaf85cf30b436b0516857a1007"} Dec 09 14:48:26 crc kubenswrapper[4770]: I1209 14:48:26.870145 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 09 14:48:27 crc kubenswrapper[4770]: I1209 14:48:27.188121 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:28 crc kubenswrapper[4770]: I1209 14:48:28.896561 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:48:28 crc kubenswrapper[4770]: I1209 14:48:28.896818 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerName="glance-log" containerID="cri-o://cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c" gracePeriod=30 Dec 09 14:48:28 crc kubenswrapper[4770]: I1209 14:48:28.897285 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerName="glance-httpd" containerID="cri-o://437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b" gracePeriod=30 Dec 09 14:48:29 crc kubenswrapper[4770]: E1209 14:48:29.398992 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Dec 09 14:48:29 crc kubenswrapper[4770]: E1209 14:48:29.399460 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n675h5c7h5bbh599h99h5ch564h4h5fbh5ch64bh9dh5c5h675h5cch679h575h55ch78h8fh55h5bh86h569hdh59dhf5h76h5bdh5dbh8ch65q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrfvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(e82b342d-8682-4892-836b-6248fcea0d3f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:48:29 crc kubenswrapper[4770]: E1209 14:48:29.401324 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="e82b342d-8682-4892-836b-6248fcea0d3f" Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.771006 4770 generic.go:334] "Generic (PLEG): container finished" podID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerID="cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c" exitCode=143 Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.771822 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"91d4d708-1c18-4827-92eb-349b5eaf6d2f","Type":"ContainerDied","Data":"cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c"} Dec 09 14:48:29 crc kubenswrapper[4770]: E1209 14:48:29.774237 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="e82b342d-8682-4892-836b-6248fcea0d3f" Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.840520 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.944323 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-combined-ca-bundle\") pod \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.944461 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-logs\") pod \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.944536 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data-custom\") pod \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.944691 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data\") pod \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.944747 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlq8d\" (UniqueName: \"kubernetes.io/projected/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-kube-api-access-tlq8d\") pod \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\" (UID: \"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9\") " Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.947312 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-logs" (OuterVolumeSpecName: "logs") pod "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" (UID: "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.951928 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-kube-api-access-tlq8d" (OuterVolumeSpecName: "kube-api-access-tlq8d") pod "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" (UID: "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9"). InnerVolumeSpecName "kube-api-access-tlq8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:29 crc kubenswrapper[4770]: I1209 14:48:29.968230 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" (UID: "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.005870 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" (UID: "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.012303 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.014718 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.022447 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data" (OuterVolumeSpecName: "config-data") pod "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" (UID: "2f2e269a-8b4e-4255-ac69-daea1cb7f8a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.046993 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.047026 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.047036 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlq8d\" (UniqueName: \"kubernetes.io/projected/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-kube-api-access-tlq8d\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.047046 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.047055 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.148931 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/805f003b-0500-42d6-9516-373ca8ec2c6a-etc-machine-id\") pod \"805f003b-0500-42d6-9516-373ca8ec2c6a\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149016 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rpgr2\" (UniqueName: \"kubernetes.io/projected/805f003b-0500-42d6-9516-373ca8ec2c6a-kube-api-access-rpgr2\") pod \"805f003b-0500-42d6-9516-373ca8ec2c6a\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149066 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-combined-ca-bundle\") pod \"805f003b-0500-42d6-9516-373ca8ec2c6a\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149125 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-sb\") pod \"17af40df-932c-4a21-8463-5d60c1335003\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149149 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-svc\") pod \"17af40df-932c-4a21-8463-5d60c1335003\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149169 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data\") pod \"805f003b-0500-42d6-9516-373ca8ec2c6a\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149189 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s45gn\" (UniqueName: \"kubernetes.io/projected/17af40df-932c-4a21-8463-5d60c1335003-kube-api-access-s45gn\") pod \"17af40df-932c-4a21-8463-5d60c1335003\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149266 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-scripts\") pod \"805f003b-0500-42d6-9516-373ca8ec2c6a\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149305 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data-custom\") pod \"805f003b-0500-42d6-9516-373ca8ec2c6a\" (UID: \"805f003b-0500-42d6-9516-373ca8ec2c6a\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149326 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-config\") pod \"17af40df-932c-4a21-8463-5d60c1335003\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149398 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-nb\") pod \"17af40df-932c-4a21-8463-5d60c1335003\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.149431 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-swift-storage-0\") pod \"17af40df-932c-4a21-8463-5d60c1335003\" (UID: \"17af40df-932c-4a21-8463-5d60c1335003\") " Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.152451 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/805f003b-0500-42d6-9516-373ca8ec2c6a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "805f003b-0500-42d6-9516-373ca8ec2c6a" (UID: "805f003b-0500-42d6-9516-373ca8ec2c6a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.172120 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805f003b-0500-42d6-9516-373ca8ec2c6a-kube-api-access-rpgr2" (OuterVolumeSpecName: "kube-api-access-rpgr2") pod "805f003b-0500-42d6-9516-373ca8ec2c6a" (UID: "805f003b-0500-42d6-9516-373ca8ec2c6a"). InnerVolumeSpecName "kube-api-access-rpgr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.174274 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "805f003b-0500-42d6-9516-373ca8ec2c6a" (UID: "805f003b-0500-42d6-9516-373ca8ec2c6a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.216482 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17af40df-932c-4a21-8463-5d60c1335003-kube-api-access-s45gn" (OuterVolumeSpecName: "kube-api-access-s45gn") pod "17af40df-932c-4a21-8463-5d60c1335003" (UID: "17af40df-932c-4a21-8463-5d60c1335003"). InnerVolumeSpecName "kube-api-access-s45gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.223288 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-scripts" (OuterVolumeSpecName: "scripts") pod "805f003b-0500-42d6-9516-373ca8ec2c6a" (UID: "805f003b-0500-42d6-9516-373ca8ec2c6a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.253005 4770 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/805f003b-0500-42d6-9516-373ca8ec2c6a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.253037 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpgr2\" (UniqueName: \"kubernetes.io/projected/805f003b-0500-42d6-9516-373ca8ec2c6a-kube-api-access-rpgr2\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.253051 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s45gn\" (UniqueName: \"kubernetes.io/projected/17af40df-932c-4a21-8463-5d60c1335003-kube-api-access-s45gn\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.253059 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.253069 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.259372 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-config" (OuterVolumeSpecName: "config") pod "17af40df-932c-4a21-8463-5d60c1335003" (UID: "17af40df-932c-4a21-8463-5d60c1335003"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.276996 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17af40df-932c-4a21-8463-5d60c1335003" (UID: "17af40df-932c-4a21-8463-5d60c1335003"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.297249 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17af40df-932c-4a21-8463-5d60c1335003" (UID: "17af40df-932c-4a21-8463-5d60c1335003"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.299473 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17af40df-932c-4a21-8463-5d60c1335003" (UID: "17af40df-932c-4a21-8463-5d60c1335003"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.327987 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "805f003b-0500-42d6-9516-373ca8ec2c6a" (UID: "805f003b-0500-42d6-9516-373ca8ec2c6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.339719 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17af40df-932c-4a21-8463-5d60c1335003" (UID: "17af40df-932c-4a21-8463-5d60c1335003"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.354518 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.354554 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.354563 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.354571 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.354583 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.354593 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af40df-932c-4a21-8463-5d60c1335003-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.371959 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data" (OuterVolumeSpecName: "config-data") pod "805f003b-0500-42d6-9516-373ca8ec2c6a" (UID: "805f003b-0500-42d6-9516-373ca8ec2c6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.456613 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805f003b-0500-42d6-9516-373ca8ec2c6a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.817091 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerStarted","Data":"ceb95d5be95b284b95b64796cda3f5010ec28ff99e5660ff012e07eef2eec2c1"} Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.817168 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerStarted","Data":"5ca4261e72fcec6bf04b7352a8324d260ba1e5c484138fc53256c93778ea63cc"} Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.820637 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d494cf4b-vwwg5" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.821373 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d494cf4b-vwwg5" event={"ID":"2f2e269a-8b4e-4255-ac69-daea1cb7f8a9","Type":"ContainerDied","Data":"7f7adf3c859ad161e6429329b10921ff10beacd9c014b3cd67ea7dae3d61a573"} Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.821409 4770 scope.go:117] "RemoveContainer" containerID="828a4c59703a982ff503a156e8431849d0dfcfa0cbaed445dcc0c74758099231" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.832133 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"805f003b-0500-42d6-9516-373ca8ec2c6a","Type":"ContainerDied","Data":"1b1b19f9bf778bb9126f33a07fd149d5508353c1741e72612c82e98fa6a7edf6"} Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.832192 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.844459 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.844455 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9mw5m" event={"ID":"17af40df-932c-4a21-8463-5d60c1335003","Type":"ContainerDied","Data":"221c08b3b40e4b74c40757dd323f7704bbc279f5a807d05a1bc23e57828e3551"} Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.850827 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"d2b8d36f-0bd8-4ac8-b673-15f5728d0a78","Type":"ContainerStarted","Data":"11793404f0ec8b7c775d50cbd22e7bf444c6d84970ed65ed6baf243ae260e33b"} Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.851655 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.881175 4770 scope.go:117] "RemoveContainer" containerID="28aef3b71a4a991157044dc1ebc1590d7f595842db0dd61ac5bd1cff4a1e8910" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.883150 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d494cf4b-vwwg5"] Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.910271 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7d494cf4b-vwwg5"] Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.918861 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=12.918841766 podStartE2EDuration="12.918841766s" podCreationTimestamp="2025-12-09 14:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:30.881097867 +0000 UTC m=+1542.777300003" watchObservedRunningTime="2025-12-09 14:48:30.918841766 +0000 UTC m=+1542.815043902" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.926741 4770 scope.go:117] "RemoveContainer" containerID="ee02a5bcfd0093fc864b110d003babe02fd582af6d5cb7f23bf85a1a56f9251b" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.932897 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.942851 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-scheduler-0"] Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.959202 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9mw5m"] Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.997117 4770 scope.go:117] "RemoveContainer" containerID="38577eba0975ea27662768e990c96a787c65deee4f4697b0969a735ced08e206" Dec 09 14:48:30 crc kubenswrapper[4770]: I1209 14:48:30.997689 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9mw5m"] Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.007454 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 14:48:31 crc kubenswrapper[4770]: E1209 14:48:31.007940 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.007958 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api" Dec 09 14:48:31 crc kubenswrapper[4770]: E1209 14:48:31.007972 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerName="probe" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.007979 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerName="probe" Dec 09 14:48:31 crc kubenswrapper[4770]: E1209 14:48:31.007988 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api-log" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.007996 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api-log" Dec 09 14:48:31 crc kubenswrapper[4770]: E1209 14:48:31.008005 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerName="cinder-scheduler" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.008010 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerName="cinder-scheduler" Dec 09 14:48:31 crc kubenswrapper[4770]: E1209 14:48:31.008036 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17af40df-932c-4a21-8463-5d60c1335003" containerName="init" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.008041 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="17af40df-932c-4a21-8463-5d60c1335003" containerName="init" Dec 09 14:48:31 crc kubenswrapper[4770]: E1209 14:48:31.008059 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17af40df-932c-4a21-8463-5d60c1335003" containerName="dnsmasq-dns" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.008066 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="17af40df-932c-4a21-8463-5d60c1335003" containerName="dnsmasq-dns" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.008243 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerName="cinder-scheduler" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.008253 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api-log" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.008263 4770 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="805f003b-0500-42d6-9516-373ca8ec2c6a" containerName="probe" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.008272 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" containerName="barbican-api" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.008287 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="17af40df-932c-4a21-8463-5d60c1335003" containerName="dnsmasq-dns" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.009919 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.012515 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.016000 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.052026 4770 scope.go:117] "RemoveContainer" containerID="eb82728c68a6ad911d750f0b3dbba376cc511c9d7e41717998f951aadf851a33" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.103055 4770 scope.go:117] "RemoveContainer" containerID="03a34229ca2a22292286d9e3f4bc30e6398d55522b5cac71af8ef2866a11df39" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.169507 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.169668 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrd7c\" (UniqueName: \"kubernetes.io/projected/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-kube-api-access-vrd7c\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.169708 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.169889 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-config-data\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.169973 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.170027 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.271666 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrd7c\" (UniqueName: \"kubernetes.io/projected/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-kube-api-access-vrd7c\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.271746 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.271821 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-config-data\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.271858 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.271892 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-scripts\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.271990 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.272404 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.277559 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.278085 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-scripts\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.279451 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-config-data\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.291635 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.299294 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrd7c\" (UniqueName: \"kubernetes.io/projected/37fe0132-33c4-4bf9-98bb-43ae4b9c7902-kube-api-access-vrd7c\") pod \"cinder-scheduler-0\" (UID: \"37fe0132-33c4-4bf9-98bb-43ae4b9c7902\") " pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.325415 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.845777 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.883601 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.883914 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f628655f-ba3b-450b-8426-a5acfabd2759" containerName="glance-log" containerID="cri-o://c60512b3560f588f98178612636d378b976d61e05636a7623ae8cc6dfc86f5bf" gracePeriod=30 Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.884781 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f628655f-ba3b-450b-8426-a5acfabd2759" containerName="glance-httpd" containerID="cri-o://530fb3e85f04971560ba128056414898ddab3197b8a20779ebae00a282a940e3" gracePeriod=30 Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.888435 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerStarted","Data":"0029b1fece3700acee77a04eefb0fecaf6db3128672e24d564b17139e53b3556"} Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.903472 4770 generic.go:334] "Generic (PLEG): container finished" podID="ba146650-9074-423d-aa8f-9cded3a49030" containerID="329e6d43c176fdcd2138c8fc42c34bbe2d96bae5abcfab163ed371ccc8f5d9c4" exitCode=0 Dec 09 14:48:31 crc kubenswrapper[4770]: I1209 14:48:31.903564 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb5d9f9bd-qssdr" event={"ID":"ba146650-9074-423d-aa8f-9cded3a49030","Type":"ContainerDied","Data":"329e6d43c176fdcd2138c8fc42c34bbe2d96bae5abcfab163ed371ccc8f5d9c4"} Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.474492 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.527356 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-config\") pod \"ba146650-9074-423d-aa8f-9cded3a49030\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.527457 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-combined-ca-bundle\") pod \"ba146650-9074-423d-aa8f-9cded3a49030\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.527650 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-httpd-config\") pod \"ba146650-9074-423d-aa8f-9cded3a49030\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.527677 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-ovndb-tls-certs\") pod \"ba146650-9074-423d-aa8f-9cded3a49030\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.528350 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7bk5\" (UniqueName: \"kubernetes.io/projected/ba146650-9074-423d-aa8f-9cded3a49030-kube-api-access-h7bk5\") pod \"ba146650-9074-423d-aa8f-9cded3a49030\" (UID: \"ba146650-9074-423d-aa8f-9cded3a49030\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.533107 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ba146650-9074-423d-aa8f-9cded3a49030" (UID: "ba146650-9074-423d-aa8f-9cded3a49030"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.538002 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba146650-9074-423d-aa8f-9cded3a49030-kube-api-access-h7bk5" (OuterVolumeSpecName: "kube-api-access-h7bk5") pod "ba146650-9074-423d-aa8f-9cded3a49030" (UID: "ba146650-9074-423d-aa8f-9cded3a49030"). InnerVolumeSpecName "kube-api-access-h7bk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.608362 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17af40df-932c-4a21-8463-5d60c1335003" path="/var/lib/kubelet/pods/17af40df-932c-4a21-8463-5d60c1335003/volumes" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.618788 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2e269a-8b4e-4255-ac69-daea1cb7f8a9" path="/var/lib/kubelet/pods/2f2e269a-8b4e-4255-ac69-daea1cb7f8a9/volumes" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.619651 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805f003b-0500-42d6-9516-373ca8ec2c6a" path="/var/lib/kubelet/pods/805f003b-0500-42d6-9516-373ca8ec2c6a/volumes" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.634168 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.634204 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7bk5\" (UniqueName: \"kubernetes.io/projected/ba146650-9074-423d-aa8f-9cded3a49030-kube-api-access-h7bk5\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.665343 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba146650-9074-423d-aa8f-9cded3a49030" (UID: "ba146650-9074-423d-aa8f-9cded3a49030"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.681942 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ba146650-9074-423d-aa8f-9cded3a49030" (UID: "ba146650-9074-423d-aa8f-9cded3a49030"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.685213 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-config" (OuterVolumeSpecName: "config") pod "ba146650-9074-423d-aa8f-9cded3a49030" (UID: "ba146650-9074-423d-aa8f-9cded3a49030"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.736051 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.736080 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.736092 4770 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba146650-9074-423d-aa8f-9cded3a49030-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.924309 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.941007 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb5d9f9bd-qssdr" event={"ID":"ba146650-9074-423d-aa8f-9cded3a49030","Type":"ContainerDied","Data":"ee2fec114e2e0140435fe3218cfdc4c51458015d52b612d28ef4eb68043d4642"} Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.941060 4770 scope.go:117] "RemoveContainer" containerID="5d8674029ae744bd48620e8d7354335e652bcddaf85cf30b436b0516857a1007" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.941204 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cb5d9f9bd-qssdr" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.956223 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-logs\") pod \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.956285 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-httpd-run\") pod \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.956325 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-scripts\") pod \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.956375 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-combined-ca-bundle\") pod \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.956420 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89bhp\" (UniqueName: \"kubernetes.io/projected/91d4d708-1c18-4827-92eb-349b5eaf6d2f-kube-api-access-89bhp\") pod \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.956442 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-public-tls-certs\") pod \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.956458 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-config-data\") pod \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.957328 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\" (UID: \"91d4d708-1c18-4827-92eb-349b5eaf6d2f\") " Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.965081 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-scripts" (OuterVolumeSpecName: "scripts") pod "91d4d708-1c18-4827-92eb-349b5eaf6d2f" (UID: "91d4d708-1c18-4827-92eb-349b5eaf6d2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.965444 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-logs" (OuterVolumeSpecName: "logs") pod "91d4d708-1c18-4827-92eb-349b5eaf6d2f" (UID: "91d4d708-1c18-4827-92eb-349b5eaf6d2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.965619 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "91d4d708-1c18-4827-92eb-349b5eaf6d2f" (UID: "91d4d708-1c18-4827-92eb-349b5eaf6d2f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.966557 4770 generic.go:334] "Generic (PLEG): container finished" podID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerID="437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b" exitCode=0 Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.966669 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91d4d708-1c18-4827-92eb-349b5eaf6d2f","Type":"ContainerDied","Data":"437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b"} Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.966701 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91d4d708-1c18-4827-92eb-349b5eaf6d2f","Type":"ContainerDied","Data":"897d30b8a1b34203472ecdc88c3229913b2ad99533004a64c4b34719b10927b6"} Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.966801 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.987607 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d4d708-1c18-4827-92eb-349b5eaf6d2f-kube-api-access-89bhp" (OuterVolumeSpecName: "kube-api-access-89bhp") pod "91d4d708-1c18-4827-92eb-349b5eaf6d2f" (UID: "91d4d708-1c18-4827-92eb-349b5eaf6d2f"). InnerVolumeSpecName "kube-api-access-89bhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.996055 4770 generic.go:334] "Generic (PLEG): container finished" podID="f628655f-ba3b-450b-8426-a5acfabd2759" containerID="c60512b3560f588f98178612636d378b976d61e05636a7623ae8cc6dfc86f5bf" exitCode=143 Dec 09 14:48:32 crc kubenswrapper[4770]: I1209 14:48:32.996203 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f628655f-ba3b-450b-8426-a5acfabd2759","Type":"ContainerDied","Data":"c60512b3560f588f98178612636d378b976d61e05636a7623ae8cc6dfc86f5bf"} Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.007942 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658" (OuterVolumeSpecName: "glance") pod "91d4d708-1c18-4827-92eb-349b5eaf6d2f" (UID: "91d4d708-1c18-4827-92eb-349b5eaf6d2f"). InnerVolumeSpecName "pvc-6ef0c204-ea23-4577-89b0-b667aeced658". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.016147 4770 scope.go:117] "RemoveContainer" containerID="329e6d43c176fdcd2138c8fc42c34bbe2d96bae5abcfab163ed371ccc8f5d9c4" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.018067 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37fe0132-33c4-4bf9-98bb-43ae4b9c7902","Type":"ContainerStarted","Data":"2723b72e2dc24b48ae5a04a84194b1da80a98ad3ecfc4b541667ca4cd8c30972"} Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.018102 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37fe0132-33c4-4bf9-98bb-43ae4b9c7902","Type":"ContainerStarted","Data":"05292db6ad5bf3e637df5864c580bec7cf84d1653392ea0ccfe2a4c3b39847a8"} Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.020421 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91d4d708-1c18-4827-92eb-349b5eaf6d2f" (UID: "91d4d708-1c18-4827-92eb-349b5eaf6d2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.068316 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.073554 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d4d708-1c18-4827-92eb-349b5eaf6d2f-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.073580 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.073634 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.073659 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89bhp\" (UniqueName: \"kubernetes.io/projected/91d4d708-1c18-4827-92eb-349b5eaf6d2f-kube-api-access-89bhp\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.073773 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") on node \"crc\" " Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.082659 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cb5d9f9bd-qssdr"] Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.102294 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "91d4d708-1c18-4827-92eb-349b5eaf6d2f" (UID: "91d4d708-1c18-4827-92eb-349b5eaf6d2f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.142993 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6cb5d9f9bd-qssdr"] Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.157587 4770 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.158175 4770 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6ef0c204-ea23-4577-89b0-b667aeced658" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658") on node "crc" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.164696 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-config-data" (OuterVolumeSpecName: "config-data") pod "91d4d708-1c18-4827-92eb-349b5eaf6d2f" (UID: "91d4d708-1c18-4827-92eb-349b5eaf6d2f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.176676 4770 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.176716 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4d708-1c18-4827-92eb-349b5eaf6d2f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.176820 4770 reconciler_common.go:293] "Volume detached for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.236928 4770 scope.go:117] "RemoveContainer" containerID="437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.274614 4770 scope.go:117] "RemoveContainer" containerID="cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.326252 4770 scope.go:117] "RemoveContainer" containerID="437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.334872 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.347362 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:48:33 crc kubenswrapper[4770]: E1209 14:48:33.351586 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b\": container with ID starting with 437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b not found: ID does not exist" containerID="437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.351631 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b"} err="failed to get container status \"437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b\": rpc error: code = NotFound desc = could not find container \"437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b\": container with ID starting with 437b93622847df8c2b0462e9f375e5692136ac4a08007cae804bae5ec5e9b41b not found: ID does not exist" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.351657 4770 scope.go:117] "RemoveContainer" containerID="cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.369676 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:48:33 crc kubenswrapper[4770]: E1209 14:48:33.370145 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba146650-9074-423d-aa8f-9cded3a49030" containerName="neutron-api" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.370159 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba146650-9074-423d-aa8f-9cded3a49030" containerName="neutron-api" Dec 09 
14:48:33 crc kubenswrapper[4770]: E1209 14:48:33.370174 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerName="glance-httpd" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.370180 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerName="glance-httpd" Dec 09 14:48:33 crc kubenswrapper[4770]: E1209 14:48:33.370203 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba146650-9074-423d-aa8f-9cded3a49030" containerName="neutron-httpd" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.370210 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba146650-9074-423d-aa8f-9cded3a49030" containerName="neutron-httpd" Dec 09 14:48:33 crc kubenswrapper[4770]: E1209 14:48:33.370226 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerName="glance-log" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.370232 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerName="glance-log" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.370424 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerName="glance-httpd" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.370439 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" containerName="glance-log" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.370461 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba146650-9074-423d-aa8f-9cded3a49030" containerName="neutron-api" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.370469 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba146650-9074-423d-aa8f-9cded3a49030" containerName="neutron-httpd" Dec 09 14:48:33 crc kubenswrapper[4770]: E1209 14:48:33.371292 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c\": container with ID starting with cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c not found: ID does not exist" containerID="cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.371412 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c"} err="failed to get container status \"cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c\": rpc error: code = NotFound desc = could not find container \"cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c\": container with ID starting with cc1f5d8eacda190fac4aeea541998a37a792aa5699cc95c75b0089bb2f2b3c4c not found: ID does not exist" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.371629 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.375524 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.375553 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.380175 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.483532 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.483587 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntsn\" (UniqueName: \"kubernetes.io/projected/0271ab27-dfdb-4223-b3f3-fc82c2024a02-kube-api-access-xntsn\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.483609 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0271ab27-dfdb-4223-b3f3-fc82c2024a02-logs\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.483628 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.483759 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-config-data\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.483845 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-scripts\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.484014 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.484075 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0271ab27-dfdb-4223-b3f3-fc82c2024a02-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.586544 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.586596 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xntsn\" (UniqueName: \"kubernetes.io/projected/0271ab27-dfdb-4223-b3f3-fc82c2024a02-kube-api-access-xntsn\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.586612 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0271ab27-dfdb-4223-b3f3-fc82c2024a02-logs\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.586631 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.586661 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-config-data\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.586686 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-scripts\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.586751 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.586777 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0271ab27-dfdb-4223-b3f3-fc82c2024a02-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.588509 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0271ab27-dfdb-4223-b3f3-fc82c2024a02-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.589251 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0271ab27-dfdb-4223-b3f3-fc82c2024a02-logs\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.591115 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-scripts\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.591599 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-config-data\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.591806 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.595687 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.595754 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4d3471d01ab9f49c6d7e7ed17e2f2c5045d7b7d89c846663711f4829da0cc6bb/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.608531 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0271ab27-dfdb-4223-b3f3-fc82c2024a02-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.616751 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xntsn\" (UniqueName: \"kubernetes.io/projected/0271ab27-dfdb-4223-b3f3-fc82c2024a02-kube-api-access-xntsn\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.667932 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ef0c204-ea23-4577-89b0-b667aeced658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef0c204-ea23-4577-89b0-b667aeced658\") pod \"glance-default-external-api-0\" (UID: \"0271ab27-dfdb-4223-b3f3-fc82c2024a02\") " pod="openstack/glance-default-external-api-0" Dec 09 14:48:33 crc kubenswrapper[4770]: I1209 14:48:33.720499 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 14:48:34 crc kubenswrapper[4770]: I1209 14:48:34.324279 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 14:48:34 crc kubenswrapper[4770]: I1209 14:48:34.602918 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d4d708-1c18-4827-92eb-349b5eaf6d2f" path="/var/lib/kubelet/pods/91d4d708-1c18-4827-92eb-349b5eaf6d2f/volumes" Dec 09 14:48:34 crc kubenswrapper[4770]: I1209 14:48:34.605369 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba146650-9074-423d-aa8f-9cded3a49030" path="/var/lib/kubelet/pods/ba146650-9074-423d-aa8f-9cded3a49030/volumes" Dec 09 14:48:35 crc kubenswrapper[4770]: I1209 14:48:35.062540 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37fe0132-33c4-4bf9-98bb-43ae4b9c7902","Type":"ContainerStarted","Data":"cd556ddca00a20a4fcce4d4ba85cbafc17ccd243df930a2b443b5447e666b6ba"} Dec 09 14:48:35 crc kubenswrapper[4770]: I1209 14:48:35.064658 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0271ab27-dfdb-4223-b3f3-fc82c2024a02","Type":"ContainerStarted","Data":"ba0cbca3f2a2f44da2d1a3cb8a7e935568fdeb5c5a6e856ac5b9342a2c7f30cd"} Dec 09 14:48:35 crc kubenswrapper[4770]: I1209 14:48:35.064723 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0271ab27-dfdb-4223-b3f3-fc82c2024a02","Type":"ContainerStarted","Data":"a11491664107b5a55a3ace4813b67c8d0eec6ade4f15e2c096d31db04caf3b3e"} Dec 09 14:48:35 crc kubenswrapper[4770]: I1209 14:48:35.100677 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.100657949 podStartE2EDuration="5.100657949s" podCreationTimestamp="2025-12-09 14:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:35.087540855 +0000 UTC m=+1546.983743011" watchObservedRunningTime="2025-12-09 14:48:35.100657949 +0000 UTC m=+1546.996860075" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.080710 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0271ab27-dfdb-4223-b3f3-fc82c2024a02","Type":"ContainerStarted","Data":"0a58960e707a97e63650f053ee5ed7020399c1c6f99a37f07890aa442f9ca284"} Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.086949 4770 generic.go:334] "Generic (PLEG): container finished" podID="f628655f-ba3b-450b-8426-a5acfabd2759" containerID="530fb3e85f04971560ba128056414898ddab3197b8a20779ebae00a282a940e3" exitCode=0 Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.087360 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f628655f-ba3b-450b-8426-a5acfabd2759","Type":"ContainerDied","Data":"530fb3e85f04971560ba128056414898ddab3197b8a20779ebae00a282a940e3"} Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.113424 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.11340761 podStartE2EDuration="3.11340761s" podCreationTimestamp="2025-12-09 14:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 
14:48:36.112256807 +0000 UTC m=+1548.008458943" watchObservedRunningTime="2025-12-09 14:48:36.11340761 +0000 UTC m=+1548.009609746" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.177022 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.256038 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-httpd-run\") pod \"f628655f-ba3b-450b-8426-a5acfabd2759\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.256354 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"f628655f-ba3b-450b-8426-a5acfabd2759\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.256409 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-config-data\") pod \"f628655f-ba3b-450b-8426-a5acfabd2759\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.256493 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-internal-tls-certs\") pod \"f628655f-ba3b-450b-8426-a5acfabd2759\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.256614 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-logs\") pod \"f628655f-ba3b-450b-8426-a5acfabd2759\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.256653 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-combined-ca-bundle\") pod \"f628655f-ba3b-450b-8426-a5acfabd2759\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.257843 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2w8x\" (UniqueName: \"kubernetes.io/projected/f628655f-ba3b-450b-8426-a5acfabd2759-kube-api-access-f2w8x\") pod \"f628655f-ba3b-450b-8426-a5acfabd2759\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.257935 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-scripts\") pod \"f628655f-ba3b-450b-8426-a5acfabd2759\" (UID: \"f628655f-ba3b-450b-8426-a5acfabd2759\") " Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.258168 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f628655f-ba3b-450b-8426-a5acfabd2759" (UID: "f628655f-ba3b-450b-8426-a5acfabd2759"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.258064 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-logs" (OuterVolumeSpecName: "logs") pod "f628655f-ba3b-450b-8426-a5acfabd2759" (UID: "f628655f-ba3b-450b-8426-a5acfabd2759"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.278217 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-scripts" (OuterVolumeSpecName: "scripts") pod "f628655f-ba3b-450b-8426-a5acfabd2759" (UID: "f628655f-ba3b-450b-8426-a5acfabd2759"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.279284 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f628655f-ba3b-450b-8426-a5acfabd2759-kube-api-access-f2w8x" (OuterVolumeSpecName: "kube-api-access-f2w8x") pod "f628655f-ba3b-450b-8426-a5acfabd2759" (UID: "f628655f-ba3b-450b-8426-a5acfabd2759"). InnerVolumeSpecName "kube-api-access-f2w8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.279612 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.279765 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f628655f-ba3b-450b-8426-a5acfabd2759-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.279846 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2w8x\" (UniqueName: \"kubernetes.io/projected/f628655f-ba3b-450b-8426-a5acfabd2759-kube-api-access-f2w8x\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.279923 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.303406 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c" (OuterVolumeSpecName: "glance") pod "f628655f-ba3b-450b-8426-a5acfabd2759" (UID: "f628655f-ba3b-450b-8426-a5acfabd2759"). InnerVolumeSpecName "pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.326409 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.340858 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f628655f-ba3b-450b-8426-a5acfabd2759" (UID: "f628655f-ba3b-450b-8426-a5acfabd2759"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.382121 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f628655f-ba3b-450b-8426-a5acfabd2759" (UID: "f628655f-ba3b-450b-8426-a5acfabd2759"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.382166 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") on node \"crc\" " Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.382207 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.412651 4770 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.413088 4770 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c") on node "crc" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.413719 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-config-data" (OuterVolumeSpecName: "config-data") pod "f628655f-ba3b-450b-8426-a5acfabd2759" (UID: "f628655f-ba3b-450b-8426-a5acfabd2759"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.483763 4770 reconciler_common.go:293] "Volume detached for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.483803 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:36 crc kubenswrapper[4770]: I1209 14:48:36.483818 4770 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f628655f-ba3b-450b-8426-a5acfabd2759-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.098523 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f628655f-ba3b-450b-8426-a5acfabd2759","Type":"ContainerDied","Data":"a9f41010ad0a5f7b8562bdacb6bd326163cfc44d091bd916e5d32708c77f0a78"} Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.098970 4770 scope.go:117] "RemoveContainer" containerID="530fb3e85f04971560ba128056414898ddab3197b8a20779ebae00a282a940e3" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.098670 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.133495 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.147181 4770 scope.go:117] "RemoveContainer" containerID="c60512b3560f588f98178612636d378b976d61e05636a7623ae8cc6dfc86f5bf" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.162775 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.174770 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:48:37 crc kubenswrapper[4770]: E1209 14:48:37.175223 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f628655f-ba3b-450b-8426-a5acfabd2759" containerName="glance-log" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.175239 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f628655f-ba3b-450b-8426-a5acfabd2759" containerName="glance-log" Dec 09 14:48:37 crc kubenswrapper[4770]: E1209 14:48:37.175279 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f628655f-ba3b-450b-8426-a5acfabd2759" containerName="glance-httpd" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.175286 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f628655f-ba3b-450b-8426-a5acfabd2759" containerName="glance-httpd" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.175493 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f628655f-ba3b-450b-8426-a5acfabd2759" containerName="glance-httpd" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.175511 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f628655f-ba3b-450b-8426-a5acfabd2759" containerName="glance-log" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.177978 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.182202 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.182304 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.202468 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.306117 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.306210 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd518e4-fbae-471d-8a84-930c03f57a61-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.306247 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.306289 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.306316 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bd518e4-fbae-471d-8a84-930c03f57a61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.306371 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbbq6\" (UniqueName: \"kubernetes.io/projected/6bd518e4-fbae-471d-8a84-930c03f57a61-kube-api-access-hbbq6\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.306413 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.306454 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.408603 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbbq6\" (UniqueName: \"kubernetes.io/projected/6bd518e4-fbae-471d-8a84-930c03f57a61-kube-api-access-hbbq6\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.408897 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.408936 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.408975 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.409029 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd518e4-fbae-471d-8a84-930c03f57a61-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.409060 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.409093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.409118 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bd518e4-fbae-471d-8a84-930c03f57a61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.409700 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bd518e4-fbae-471d-8a84-930c03f57a61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.409745 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd518e4-fbae-471d-8a84-930c03f57a61-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.414453 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.415077 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.415550 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.417601 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd518e4-fbae-471d-8a84-930c03f57a61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.421331 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.421368 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6216abc33ee4323cfd7a1b0602636549bc5b0a652afc01bbb6b33ecbaa1f00d0/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.431883 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbbq6\" (UniqueName: \"kubernetes.io/projected/6bd518e4-fbae-471d-8a84-930c03f57a61-kube-api-access-hbbq6\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.469195 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5378c3e-5e1b-47aa-8067-9c7ac2e30e8c\") pod \"glance-default-internal-api-0\" (UID: \"6bd518e4-fbae-471d-8a84-930c03f57a61\") " pod="openstack/glance-default-internal-api-0" Dec 09 14:48:37 crc kubenswrapper[4770]: I1209 14:48:37.508445 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.083598 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 14:48:38 crc kubenswrapper[4770]: W1209 14:48:38.095892 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bd518e4_fbae_471d_8a84_930c03f57a61.slice/crio-6dc4086afc51769de175236c178cf0dbd70f9302437cd4690a1d71ef63b4c1f3 WatchSource:0}: Error finding container 6dc4086afc51769de175236c178cf0dbd70f9302437cd4690a1d71ef63b4c1f3: Status 404 returned error can't find the container with id 6dc4086afc51769de175236c178cf0dbd70f9302437cd4690a1d71ef63b4c1f3 Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.138169 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bd518e4-fbae-471d-8a84-930c03f57a61","Type":"ContainerStarted","Data":"6dc4086afc51769de175236c178cf0dbd70f9302437cd4690a1d71ef63b4c1f3"} Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.166235 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerStarted","Data":"48e2f6772a1746d1f0c12f24a3fe27dfa6c8705e16e39243dfdbe58c521da8d6"} Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.166375 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.166402 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="ceilometer-central-agent" containerID="cri-o://5ca4261e72fcec6bf04b7352a8324d260ba1e5c484138fc53256c93778ea63cc" gracePeriod=30 Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.166565 4770 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="proxy-httpd" containerID="cri-o://48e2f6772a1746d1f0c12f24a3fe27dfa6c8705e16e39243dfdbe58c521da8d6" gracePeriod=30 Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.166606 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="sg-core" containerID="cri-o://0029b1fece3700acee77a04eefb0fecaf6db3128672e24d564b17139e53b3556" gracePeriod=30 Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.166647 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="ceilometer-notification-agent" containerID="cri-o://ceb95d5be95b284b95b64796cda3f5010ec28ff99e5660ff012e07eef2eec2c1" gracePeriod=30 Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.194969 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.436126811 podStartE2EDuration="20.19494578s" podCreationTimestamp="2025-12-09 14:48:18 +0000 UTC" firstStartedPulling="2025-12-09 14:48:19.52646591 +0000 UTC m=+1531.422668046" lastFinishedPulling="2025-12-09 14:48:37.285284879 +0000 UTC m=+1549.181487015" observedRunningTime="2025-12-09 14:48:38.193701505 +0000 UTC m=+1550.089903661" watchObservedRunningTime="2025-12-09 14:48:38.19494578 +0000 UTC m=+1550.091147926" Dec 09 14:48:38 crc kubenswrapper[4770]: I1209 14:48:38.614740 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f628655f-ba3b-450b-8426-a5acfabd2759" path="/var/lib/kubelet/pods/f628655f-ba3b-450b-8426-a5acfabd2759/volumes" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.204117 4770 generic.go:334] "Generic (PLEG): container finished" podID="a6644d11-e035-4b8f-919f-698a54dd3983" containerID="48e2f6772a1746d1f0c12f24a3fe27dfa6c8705e16e39243dfdbe58c521da8d6" exitCode=0 Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.204424 4770 generic.go:334] "Generic (PLEG): container finished" podID="a6644d11-e035-4b8f-919f-698a54dd3983" containerID="0029b1fece3700acee77a04eefb0fecaf6db3128672e24d564b17139e53b3556" exitCode=2 Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.204434 4770 generic.go:334] "Generic (PLEG): container finished" podID="a6644d11-e035-4b8f-919f-698a54dd3983" containerID="ceb95d5be95b284b95b64796cda3f5010ec28ff99e5660ff012e07eef2eec2c1" exitCode=0 Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.204442 4770 generic.go:334] "Generic (PLEG): container finished" podID="a6644d11-e035-4b8f-919f-698a54dd3983" containerID="5ca4261e72fcec6bf04b7352a8324d260ba1e5c484138fc53256c93778ea63cc" exitCode=0 Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.204210 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerDied","Data":"48e2f6772a1746d1f0c12f24a3fe27dfa6c8705e16e39243dfdbe58c521da8d6"} Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.204514 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerDied","Data":"0029b1fece3700acee77a04eefb0fecaf6db3128672e24d564b17139e53b3556"} Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.204528 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerDied","Data":"ceb95d5be95b284b95b64796cda3f5010ec28ff99e5660ff012e07eef2eec2c1"} Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.204538 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerDied","Data":"5ca4261e72fcec6bf04b7352a8324d260ba1e5c484138fc53256c93778ea63cc"} Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.208346 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bd518e4-fbae-471d-8a84-930c03f57a61","Type":"ContainerStarted","Data":"98552659b53e9281698c4eca709dbdd8f3981e992e5808cf443fae5df5a6d32f"} Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.209519 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.350226 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmq7w\" (UniqueName: \"kubernetes.io/projected/a6644d11-e035-4b8f-919f-698a54dd3983-kube-api-access-mmq7w\") pod \"a6644d11-e035-4b8f-919f-698a54dd3983\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.350923 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-config-data\") pod \"a6644d11-e035-4b8f-919f-698a54dd3983\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.351015 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-sg-core-conf-yaml\") pod \"a6644d11-e035-4b8f-919f-698a54dd3983\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.351057 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-combined-ca-bundle\") pod \"a6644d11-e035-4b8f-919f-698a54dd3983\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.351121 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-run-httpd\") pod \"a6644d11-e035-4b8f-919f-698a54dd3983\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.351179 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-scripts\") pod \"a6644d11-e035-4b8f-919f-698a54dd3983\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.351212 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-log-httpd\") pod \"a6644d11-e035-4b8f-919f-698a54dd3983\" (UID: \"a6644d11-e035-4b8f-919f-698a54dd3983\") " Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.351607 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a6644d11-e035-4b8f-919f-698a54dd3983" (UID: "a6644d11-e035-4b8f-919f-698a54dd3983"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.352272 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.352676 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a6644d11-e035-4b8f-919f-698a54dd3983" (UID: "a6644d11-e035-4b8f-919f-698a54dd3983"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.362216 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-scripts" (OuterVolumeSpecName: "scripts") pod "a6644d11-e035-4b8f-919f-698a54dd3983" (UID: "a6644d11-e035-4b8f-919f-698a54dd3983"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.371011 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6644d11-e035-4b8f-919f-698a54dd3983-kube-api-access-mmq7w" (OuterVolumeSpecName: "kube-api-access-mmq7w") pod "a6644d11-e035-4b8f-919f-698a54dd3983" (UID: "a6644d11-e035-4b8f-919f-698a54dd3983"). InnerVolumeSpecName "kube-api-access-mmq7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.389665 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a6644d11-e035-4b8f-919f-698a54dd3983" (UID: "a6644d11-e035-4b8f-919f-698a54dd3983"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.451446 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6644d11-e035-4b8f-919f-698a54dd3983" (UID: "a6644d11-e035-4b8f-919f-698a54dd3983"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.453961 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmq7w\" (UniqueName: \"kubernetes.io/projected/a6644d11-e035-4b8f-919f-698a54dd3983-kube-api-access-mmq7w\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.454003 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.454017 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.454030 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.454041 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6644d11-e035-4b8f-919f-698a54dd3983-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.492568 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-config-data" (OuterVolumeSpecName: "config-data") pod "a6644d11-e035-4b8f-919f-698a54dd3983" (UID: "a6644d11-e035-4b8f-919f-698a54dd3983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:39 crc kubenswrapper[4770]: I1209 14:48:39.558384 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6644d11-e035-4b8f-919f-698a54dd3983-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.221793 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.222141 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6644d11-e035-4b8f-919f-698a54dd3983","Type":"ContainerDied","Data":"b18e29adc31593903aed53142c97a2f5e771dc62508d183051b7b738cd9c9e1d"} Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.222338 4770 scope.go:117] "RemoveContainer" containerID="48e2f6772a1746d1f0c12f24a3fe27dfa6c8705e16e39243dfdbe58c521da8d6" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.224829 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bd518e4-fbae-471d-8a84-930c03f57a61","Type":"ContainerStarted","Data":"38c3765c1769070b96da14a7630901b9032441d1d5c8a1846c22a95542aeed5c"} Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.246019 4770 scope.go:117] "RemoveContainer" containerID="0029b1fece3700acee77a04eefb0fecaf6db3128672e24d564b17139e53b3556" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.260934 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.260906126 podStartE2EDuration="3.260906126s" podCreationTimestamp="2025-12-09 14:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:40.250795675 +0000 UTC m=+1552.146997841" watchObservedRunningTime="2025-12-09 14:48:40.260906126 +0000 UTC m=+1552.157108262" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.279672 4770 scope.go:117] "RemoveContainer" containerID="ceb95d5be95b284b95b64796cda3f5010ec28ff99e5660ff012e07eef2eec2c1" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.295606 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.313555 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.322087 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:40 crc kubenswrapper[4770]: E1209 14:48:40.322634 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="sg-core" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.322655 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="sg-core" Dec 09 14:48:40 crc kubenswrapper[4770]: E1209 14:48:40.322704 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="ceilometer-notification-agent" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.322713 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="ceilometer-notification-agent" Dec 09 14:48:40 crc kubenswrapper[4770]: E1209 14:48:40.322817 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="proxy-httpd" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.322830 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="proxy-httpd" Dec 09 14:48:40 crc kubenswrapper[4770]: E1209 14:48:40.322849 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="ceilometer-central-agent" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.322857 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="ceilometer-central-agent" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.323100 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="sg-core" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.323137 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="ceilometer-notification-agent" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.323149 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="proxy-httpd" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.323186 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" containerName="ceilometer-central-agent" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.325835 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.328899 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.329643 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.336127 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.340166 4770 scope.go:117] "RemoveContainer" containerID="5ca4261e72fcec6bf04b7352a8324d260ba1e5c484138fc53256c93778ea63cc" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.374575 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-scripts\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.374662 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.374687 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-log-httpd\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.374754 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-config-data\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.374809 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-run-httpd\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.374915 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pclcv\" (UniqueName: \"kubernetes.io/projected/be8aafa6-5ec8-4c5f-885f-d4342943827a-kube-api-access-pclcv\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.374981 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.476548 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pclcv\" (UniqueName: \"kubernetes.io/projected/be8aafa6-5ec8-4c5f-885f-d4342943827a-kube-api-access-pclcv\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.476638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.476694 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-scripts\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.476756 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.476774 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-log-httpd\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.476818 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-config-data\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.476851 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-run-httpd\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.477438 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-log-httpd\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.477464 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-run-httpd\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.482751 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-config-data\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.483198 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.484130 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-scripts\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.487608 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.497356 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pclcv\" (UniqueName: \"kubernetes.io/projected/be8aafa6-5ec8-4c5f-885f-d4342943827a-kube-api-access-pclcv\") pod \"ceilometer-0\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " pod="openstack/ceilometer-0" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.600470 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6644d11-e035-4b8f-919f-698a54dd3983" path="/var/lib/kubelet/pods/a6644d11-e035-4b8f-919f-698a54dd3983/volumes" Dec 09 14:48:40 crc kubenswrapper[4770]: I1209 14:48:40.643587 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:48:41 crc kubenswrapper[4770]: I1209 14:48:41.102920 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:41 crc kubenswrapper[4770]: W1209 14:48:41.109118 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe8aafa6_5ec8_4c5f_885f_d4342943827a.slice/crio-0b92bd54dc0503dfef9f052c8aacb8934e221855e7f6c99cbdcc88d3f5d6d1ca WatchSource:0}: Error finding container 0b92bd54dc0503dfef9f052c8aacb8934e221855e7f6c99cbdcc88d3f5d6d1ca: Status 404 returned error can't find the container with id 0b92bd54dc0503dfef9f052c8aacb8934e221855e7f6c99cbdcc88d3f5d6d1ca Dec 09 14:48:41 crc kubenswrapper[4770]: I1209 14:48:41.238847 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerStarted","Data":"0b92bd54dc0503dfef9f052c8aacb8934e221855e7f6c99cbdcc88d3f5d6d1ca"} Dec 09 14:48:41 crc kubenswrapper[4770]: I1209 14:48:41.519045 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 14:48:43 crc kubenswrapper[4770]: I1209 14:48:43.345395 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerStarted","Data":"f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d"} Dec 09 14:48:43 crc kubenswrapper[4770]: I1209 14:48:43.721798 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 14:48:43 crc kubenswrapper[4770]: I1209 14:48:43.721863 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 14:48:43 crc kubenswrapper[4770]: I1209 14:48:43.758626 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 14:48:43 crc kubenswrapper[4770]: I1209 14:48:43.770939 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 14:48:44 crc kubenswrapper[4770]: I1209 14:48:44.243424 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:48:44 crc kubenswrapper[4770]: I1209 14:48:44.243975 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:48:44 crc kubenswrapper[4770]: I1209 14:48:44.360863 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerStarted","Data":"a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f"} Dec 09 14:48:44 crc kubenswrapper[4770]: I1209 14:48:44.361115 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 14:48:44 crc kubenswrapper[4770]: I1209 14:48:44.361156 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 14:48:45 crc kubenswrapper[4770]: I1209 14:48:45.374899 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerStarted","Data":"7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c"} Dec 09 14:48:45 crc kubenswrapper[4770]: I1209 14:48:45.376838 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e82b342d-8682-4892-836b-6248fcea0d3f","Type":"ContainerStarted","Data":"08a802a1d48eff283c42ce6c195311892382ad497992e23e751e624e93a867cf"} Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.288927 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.702719684 podStartE2EDuration="48.288901566s" podCreationTimestamp="2025-12-09 14:47:58 +0000 UTC" firstStartedPulling="2025-12-09 14:47:59.657718141 +0000 UTC m=+1511.553920277" lastFinishedPulling="2025-12-09 14:48:44.243900023 +0000 UTC m=+1556.140102159" observedRunningTime="2025-12-09 14:48:45.394701416 +0000 UTC m=+1557.290903592" watchObservedRunningTime="2025-12-09 14:48:46.288901566 +0000 UTC m=+1558.185103692" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.297508 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-s92kc"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.299282 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s92kc" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.324233 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s92kc"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.389923 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5700d19d-1648-4915-a6a2-39d8206f438c-operator-scripts\") pod \"nova-api-db-create-s92kc\" (UID: \"5700d19d-1648-4915-a6a2-39d8206f438c\") " pod="openstack/nova-api-db-create-s92kc" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.390081 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25ktd\" (UniqueName: \"kubernetes.io/projected/5700d19d-1648-4915-a6a2-39d8206f438c-kube-api-access-25ktd\") pod \"nova-api-db-create-s92kc\" (UID: \"5700d19d-1648-4915-a6a2-39d8206f438c\") " pod="openstack/nova-api-db-create-s92kc" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.393856 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerStarted","Data":"90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8"} Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.394243 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.399092 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6j4pg"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.437869 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.446338 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6j4pg"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.474234 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f887-account-create-update-22fct"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.476192 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.479920 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.490293 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.490411 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.496975 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f887-account-create-update-22fct"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.502643 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5700d19d-1648-4915-a6a2-39d8206f438c-operator-scripts\") pod \"nova-api-db-create-s92kc\" (UID: \"5700d19d-1648-4915-a6a2-39d8206f438c\") " pod="openstack/nova-api-db-create-s92kc" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.502895 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25ktd\" (UniqueName: \"kubernetes.io/projected/5700d19d-1648-4915-a6a2-39d8206f438c-kube-api-access-25ktd\") pod \"nova-api-db-create-s92kc\" (UID: \"5700d19d-1648-4915-a6a2-39d8206f438c\") " pod="openstack/nova-api-db-create-s92kc" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.508037 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5700d19d-1648-4915-a6a2-39d8206f438c-operator-scripts\") pod \"nova-api-db-create-s92kc\" (UID: \"5700d19d-1648-4915-a6a2-39d8206f438c\") " pod="openstack/nova-api-db-create-s92kc" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.508441 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9786660999999999 podStartE2EDuration="6.508389876s" podCreationTimestamp="2025-12-09 14:48:40 +0000 UTC" firstStartedPulling="2025-12-09 14:48:41.111864092 +0000 UTC m=+1553.008066228" lastFinishedPulling="2025-12-09 14:48:45.641587878 +0000 UTC m=+1557.537790004" observedRunningTime="2025-12-09 14:48:46.43416895 +0000 UTC m=+1558.330371086" watchObservedRunningTime="2025-12-09 14:48:46.508389876 +0000 UTC m=+1558.404592032" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.539017 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25ktd\" (UniqueName: \"kubernetes.io/projected/5700d19d-1648-4915-a6a2-39d8206f438c-kube-api-access-25ktd\") pod \"nova-api-db-create-s92kc\" (UID: \"5700d19d-1648-4915-a6a2-39d8206f438c\") " pod="openstack/nova-api-db-create-s92kc" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.604668 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4b2573-dd09-4e1a-9724-af01d57049ca-operator-scripts\") pod \"nova-api-f887-account-create-update-22fct\" (UID: \"6f4b2573-dd09-4e1a-9724-af01d57049ca\") " pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.604868 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff5214ff-7385-4380-be7e-1928902df33a-operator-scripts\") pod \"nova-cell0-db-create-6j4pg\" (UID: \"ff5214ff-7385-4380-be7e-1928902df33a\") " pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.604893 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cj59\" (UniqueName: \"kubernetes.io/projected/ff5214ff-7385-4380-be7e-1928902df33a-kube-api-access-6cj59\") pod \"nova-cell0-db-create-6j4pg\" (UID: \"ff5214ff-7385-4380-be7e-1928902df33a\") " pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.604939 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88hj\" (UniqueName: \"kubernetes.io/projected/6f4b2573-dd09-4e1a-9724-af01d57049ca-kube-api-access-c88hj\") pod \"nova-api-f887-account-create-update-22fct\" (UID: \"6f4b2573-dd09-4e1a-9724-af01d57049ca\") " pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.618929 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s92kc" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.632240 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8vx5w"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.634298 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.657777 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8vx5w"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.657896 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.688783 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4266-account-create-update-lc6m7"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.692793 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.696640 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.717640 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p276q\" (UniqueName: \"kubernetes.io/projected/13c4b0e5-c858-457d-ad30-71166591f03f-kube-api-access-p276q\") pod \"nova-cell0-4266-account-create-update-lc6m7\" (UID: \"13c4b0e5-c858-457d-ad30-71166591f03f\") " pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.717700 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4b2573-dd09-4e1a-9724-af01d57049ca-operator-scripts\") pod \"nova-api-f887-account-create-update-22fct\" (UID: \"6f4b2573-dd09-4e1a-9724-af01d57049ca\") " pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.718442 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c4b0e5-c858-457d-ad30-71166591f03f-operator-scripts\") pod \"nova-cell0-4266-account-create-update-lc6m7\" (UID: \"13c4b0e5-c858-457d-ad30-71166591f03f\") " pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.718511 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff5214ff-7385-4380-be7e-1928902df33a-operator-scripts\") pod \"nova-cell0-db-create-6j4pg\" (UID: \"ff5214ff-7385-4380-be7e-1928902df33a\") " pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.718536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cj59\" (UniqueName: \"kubernetes.io/projected/ff5214ff-7385-4380-be7e-1928902df33a-kube-api-access-6cj59\") pod \"nova-cell0-db-create-6j4pg\" (UID: \"ff5214ff-7385-4380-be7e-1928902df33a\") " pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.718606 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88hj\" (UniqueName: \"kubernetes.io/projected/6f4b2573-dd09-4e1a-9724-af01d57049ca-kube-api-access-c88hj\") pod \"nova-api-f887-account-create-update-22fct\" (UID: \"6f4b2573-dd09-4e1a-9724-af01d57049ca\") " pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.722644 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff5214ff-7385-4380-be7e-1928902df33a-operator-scripts\") pod \"nova-cell0-db-create-6j4pg\" (UID: \"ff5214ff-7385-4380-be7e-1928902df33a\") " pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.723619 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4b2573-dd09-4e1a-9724-af01d57049ca-operator-scripts\") pod \"nova-api-f887-account-create-update-22fct\" (UID: \"6f4b2573-dd09-4e1a-9724-af01d57049ca\") " 
pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.741758 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cj59\" (UniqueName: \"kubernetes.io/projected/ff5214ff-7385-4380-be7e-1928902df33a-kube-api-access-6cj59\") pod \"nova-cell0-db-create-6j4pg\" (UID: \"ff5214ff-7385-4380-be7e-1928902df33a\") " pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.751742 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4266-account-create-update-lc6m7"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.753552 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88hj\" (UniqueName: \"kubernetes.io/projected/6f4b2573-dd09-4e1a-9724-af01d57049ca-kube-api-access-c88hj\") pod \"nova-api-f887-account-create-update-22fct\" (UID: \"6f4b2573-dd09-4e1a-9724-af01d57049ca\") " pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.768601 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.812425 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.825429 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7q5j\" (UniqueName: \"kubernetes.io/projected/20162029-a6e5-432d-93e1-54d7d9aeed22-kube-api-access-v7q5j\") pod \"nova-cell1-db-create-8vx5w\" (UID: \"20162029-a6e5-432d-93e1-54d7d9aeed22\") " pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.825598 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20162029-a6e5-432d-93e1-54d7d9aeed22-operator-scripts\") pod \"nova-cell1-db-create-8vx5w\" (UID: \"20162029-a6e5-432d-93e1-54d7d9aeed22\") " pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.825712 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p276q\" (UniqueName: \"kubernetes.io/projected/13c4b0e5-c858-457d-ad30-71166591f03f-kube-api-access-p276q\") pod \"nova-cell0-4266-account-create-update-lc6m7\" (UID: \"13c4b0e5-c858-457d-ad30-71166591f03f\") " pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.825896 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c4b0e5-c858-457d-ad30-71166591f03f-operator-scripts\") pod \"nova-cell0-4266-account-create-update-lc6m7\" (UID: \"13c4b0e5-c858-457d-ad30-71166591f03f\") " pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.826844 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c4b0e5-c858-457d-ad30-71166591f03f-operator-scripts\") pod \"nova-cell0-4266-account-create-update-lc6m7\" (UID: \"13c4b0e5-c858-457d-ad30-71166591f03f\") " pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:46 crc 
kubenswrapper[4770]: I1209 14:48:46.851342 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p276q\" (UniqueName: \"kubernetes.io/projected/13c4b0e5-c858-457d-ad30-71166591f03f-kube-api-access-p276q\") pod \"nova-cell0-4266-account-create-update-lc6m7\" (UID: \"13c4b0e5-c858-457d-ad30-71166591f03f\") " pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.853821 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7735-account-create-update-5p282"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.859685 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.862226 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.878087 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7735-account-create-update-5p282"] Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.927476 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7q5j\" (UniqueName: \"kubernetes.io/projected/20162029-a6e5-432d-93e1-54d7d9aeed22-kube-api-access-v7q5j\") pod \"nova-cell1-db-create-8vx5w\" (UID: \"20162029-a6e5-432d-93e1-54d7d9aeed22\") " pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.927527 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9qt\" (UniqueName: \"kubernetes.io/projected/d79eb99a-b722-426a-aab8-ff0404972446-kube-api-access-xc9qt\") pod \"nova-cell1-7735-account-create-update-5p282\" (UID: \"d79eb99a-b722-426a-aab8-ff0404972446\") " pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.927575 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d79eb99a-b722-426a-aab8-ff0404972446-operator-scripts\") pod \"nova-cell1-7735-account-create-update-5p282\" (UID: \"d79eb99a-b722-426a-aab8-ff0404972446\") " pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.927600 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20162029-a6e5-432d-93e1-54d7d9aeed22-operator-scripts\") pod \"nova-cell1-db-create-8vx5w\" (UID: \"20162029-a6e5-432d-93e1-54d7d9aeed22\") " pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.929433 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20162029-a6e5-432d-93e1-54d7d9aeed22-operator-scripts\") pod \"nova-cell1-db-create-8vx5w\" (UID: \"20162029-a6e5-432d-93e1-54d7d9aeed22\") " pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.948784 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7q5j\" (UniqueName: \"kubernetes.io/projected/20162029-a6e5-432d-93e1-54d7d9aeed22-kube-api-access-v7q5j\") pod \"nova-cell1-db-create-8vx5w\" (UID: \"20162029-a6e5-432d-93e1-54d7d9aeed22\") " 
pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:46 crc kubenswrapper[4770]: I1209 14:48:46.998469 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.018520 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.032271 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc9qt\" (UniqueName: \"kubernetes.io/projected/d79eb99a-b722-426a-aab8-ff0404972446-kube-api-access-xc9qt\") pod \"nova-cell1-7735-account-create-update-5p282\" (UID: \"d79eb99a-b722-426a-aab8-ff0404972446\") " pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.032378 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d79eb99a-b722-426a-aab8-ff0404972446-operator-scripts\") pod \"nova-cell1-7735-account-create-update-5p282\" (UID: \"d79eb99a-b722-426a-aab8-ff0404972446\") " pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.033465 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d79eb99a-b722-426a-aab8-ff0404972446-operator-scripts\") pod \"nova-cell1-7735-account-create-update-5p282\" (UID: \"d79eb99a-b722-426a-aab8-ff0404972446\") " pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.054479 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc9qt\" (UniqueName: \"kubernetes.io/projected/d79eb99a-b722-426a-aab8-ff0404972446-kube-api-access-xc9qt\") pod \"nova-cell1-7735-account-create-update-5p282\" (UID: \"d79eb99a-b722-426a-aab8-ff0404972446\") " pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.320853 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s92kc"] Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.356095 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.427505 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s92kc" event={"ID":"5700d19d-1648-4915-a6a2-39d8206f438c","Type":"ContainerStarted","Data":"9bb0ac37c6a13ed3d40f394c2a0a523da63817bbc4686018944c9d63f2453613"} Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.503902 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f887-account-create-update-22fct"] Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.511002 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.511049 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.530792 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6j4pg"] Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.686695 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.779118 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:47 crc kubenswrapper[4770]: I1209 14:48:47.970683 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4266-account-create-update-lc6m7"] Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.003103 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8vx5w"] Dec 09 14:48:48 crc kubenswrapper[4770]: W1209 14:48:48.009292 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20162029_a6e5_432d_93e1_54d7d9aeed22.slice/crio-d0322246ca9cabcb7a207a5bc0c206874c9ae666ee6323aa847924ddc5b80f90 WatchSource:0}: Error finding container d0322246ca9cabcb7a207a5bc0c206874c9ae666ee6323aa847924ddc5b80f90: Status 404 returned error can't find the container with id d0322246ca9cabcb7a207a5bc0c206874c9ae666ee6323aa847924ddc5b80f90 Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.300198 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7735-account-create-update-5p282"] Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.441065 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7735-account-create-update-5p282" event={"ID":"d79eb99a-b722-426a-aab8-ff0404972446","Type":"ContainerStarted","Data":"08c5714f77daad4f702be2cb717dea4cab47430dd0ea97ed43668ba892c0ce4c"} Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.450078 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f887-account-create-update-22fct" event={"ID":"6f4b2573-dd09-4e1a-9724-af01d57049ca","Type":"ContainerStarted","Data":"31244015d95e86cf3a0ff943ee3cdaaa825225f962900da68b7f7a8e05d875ff"} Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.450132 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f887-account-create-update-22fct" event={"ID":"6f4b2573-dd09-4e1a-9724-af01d57049ca","Type":"ContainerStarted","Data":"c09aa7ac02e783a90433588cb7b9bfb96efcc5ffc67d40de72d89daade2e1858"} Dec 09 14:48:48 crc 
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.454215 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4266-account-create-update-lc6m7" event={"ID":"13c4b0e5-c858-457d-ad30-71166591f03f","Type":"ContainerStarted","Data":"a74bf5420f3561b1a5790cfd015309d44c01d7b790e312823c213b2e0c71c490"}
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.459714 4770 generic.go:334] "Generic (PLEG): container finished" podID="5700d19d-1648-4915-a6a2-39d8206f438c" containerID="9d256ddf3d4add255dde76abadecdd5e04206d525ec019c04e3a1508bbcc96bd" exitCode=0
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.459952 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s92kc" event={"ID":"5700d19d-1648-4915-a6a2-39d8206f438c","Type":"ContainerDied","Data":"9d256ddf3d4add255dde76abadecdd5e04206d525ec019c04e3a1508bbcc96bd"}
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.467165 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6j4pg" event={"ID":"ff5214ff-7385-4380-be7e-1928902df33a","Type":"ContainerStarted","Data":"2b28c4a760dd7a697fe72b42796607fe0e3c4adfbdf1ed1491a16240a6341c14"}
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.467218 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6j4pg" event={"ID":"ff5214ff-7385-4380-be7e-1928902df33a","Type":"ContainerStarted","Data":"2e70a3a1b6a287650c0bc7bea55c67fc9507a504c91c36097d4cd2a545f01c8e"}
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.487204 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8vx5w" event={"ID":"20162029-a6e5-432d-93e1-54d7d9aeed22","Type":"ContainerStarted","Data":"d0322246ca9cabcb7a207a5bc0c206874c9ae666ee6323aa847924ddc5b80f90"}
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.487676 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.487715 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.493471 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-f887-account-create-update-22fct" podStartSLOduration=2.493447891 podStartE2EDuration="2.493447891s" podCreationTimestamp="2025-12-09 14:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:48.476984412 +0000 UTC m=+1560.373186548" watchObservedRunningTime="2025-12-09 14:48:48.493447891 +0000 UTC m=+1560.389650057"
Dec 09 14:48:48 crc kubenswrapper[4770]: I1209 14:48:48.596438 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-6j4pg" podStartSLOduration=2.5964152069999997 podStartE2EDuration="2.596415207s" podCreationTimestamp="2025-12-09 14:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:48.501495035 +0000 UTC m=+1560.397697171" watchObservedRunningTime="2025-12-09 14:48:48.596415207 +0000 UTC m=+1560.492617343"
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.545265 4770 generic.go:334] "Generic (PLEG): container finished" podID="6f4b2573-dd09-4e1a-9724-af01d57049ca" containerID="31244015d95e86cf3a0ff943ee3cdaaa825225f962900da68b7f7a8e05d875ff" exitCode=0
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.545635 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f887-account-create-update-22fct" event={"ID":"6f4b2573-dd09-4e1a-9724-af01d57049ca","Type":"ContainerDied","Data":"31244015d95e86cf3a0ff943ee3cdaaa825225f962900da68b7f7a8e05d875ff"}
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.559948 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4266-account-create-update-lc6m7" event={"ID":"13c4b0e5-c858-457d-ad30-71166591f03f","Type":"ContainerStarted","Data":"a6b7832d633e1b84ae204cc50e3dc40e5b7dac8b935f9aca182835b63a5713e2"}
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.563861 4770 generic.go:334] "Generic (PLEG): container finished" podID="ff5214ff-7385-4380-be7e-1928902df33a" containerID="2b28c4a760dd7a697fe72b42796607fe0e3c4adfbdf1ed1491a16240a6341c14" exitCode=0
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.563949 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6j4pg" event={"ID":"ff5214ff-7385-4380-be7e-1928902df33a","Type":"ContainerDied","Data":"2b28c4a760dd7a697fe72b42796607fe0e3c4adfbdf1ed1491a16240a6341c14"}
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.573165 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8vx5w" event={"ID":"20162029-a6e5-432d-93e1-54d7d9aeed22","Type":"ContainerStarted","Data":"83efbc95987ae6603ab00b88f9ac384854c2d318fbb55261d926363a7b2630f5"}
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.602078 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7735-account-create-update-5p282" event={"ID":"d79eb99a-b722-426a-aab8-ff0404972446","Type":"ContainerStarted","Data":"c1d4af286b992f1d650444bce28aff09e3d4c7631473b64bc3e6f29d3185fef5"}
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.624746 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-4266-account-create-update-lc6m7" podStartSLOduration=3.62471188 podStartE2EDuration="3.62471188s" podCreationTimestamp="2025-12-09 14:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:49.622853308 +0000 UTC m=+1561.519055454" watchObservedRunningTime="2025-12-09 14:48:49.62471188 +0000 UTC m=+1561.520914016"
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.669560 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-8vx5w" podStartSLOduration=3.669537087 podStartE2EDuration="3.669537087s" podCreationTimestamp="2025-12-09 14:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:49.658599813 +0000 UTC m=+1561.554801949" watchObservedRunningTime="2025-12-09 14:48:49.669537087 +0000 UTC m=+1561.565739223"
Dec 09 14:48:49 crc kubenswrapper[4770]: I1209 14:48:49.707441 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-7735-account-create-update-5p282" podStartSLOduration=3.707418961 podStartE2EDuration="3.707418961s" podCreationTimestamp="2025-12-09 14:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:49.687665272 +0000 UTC m=+1561.583867408" watchObservedRunningTime="2025-12-09 14:48:49.707418961 +0000 UTC m=+1561.603621117"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.191211 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s92kc"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.377545 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5700d19d-1648-4915-a6a2-39d8206f438c-operator-scripts\") pod \"5700d19d-1648-4915-a6a2-39d8206f438c\" (UID: \"5700d19d-1648-4915-a6a2-39d8206f438c\") "
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.377652 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25ktd\" (UniqueName: \"kubernetes.io/projected/5700d19d-1648-4915-a6a2-39d8206f438c-kube-api-access-25ktd\") pod \"5700d19d-1648-4915-a6a2-39d8206f438c\" (UID: \"5700d19d-1648-4915-a6a2-39d8206f438c\") "
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.378688 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5700d19d-1648-4915-a6a2-39d8206f438c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5700d19d-1648-4915-a6a2-39d8206f438c" (UID: "5700d19d-1648-4915-a6a2-39d8206f438c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.384615 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5700d19d-1648-4915-a6a2-39d8206f438c-kube-api-access-25ktd" (OuterVolumeSpecName: "kube-api-access-25ktd") pod "5700d19d-1648-4915-a6a2-39d8206f438c" (UID: "5700d19d-1648-4915-a6a2-39d8206f438c"). InnerVolumeSpecName "kube-api-access-25ktd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.481185 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5700d19d-1648-4915-a6a2-39d8206f438c-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.481245 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25ktd\" (UniqueName: \"kubernetes.io/projected/5700d19d-1648-4915-a6a2-39d8206f438c-kube-api-access-25ktd\") on node \"crc\" DevicePath \"\""
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.538568 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.616320 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s92kc" event={"ID":"5700d19d-1648-4915-a6a2-39d8206f438c","Type":"ContainerDied","Data":"9bb0ac37c6a13ed3d40f394c2a0a523da63817bbc4686018944c9d63f2453613"}
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.616359 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb0ac37c6a13ed3d40f394c2a0a523da63817bbc4686018944c9d63f2453613"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.616416 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s92kc"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.619189 4770 generic.go:334] "Generic (PLEG): container finished" podID="8608e197-8cbc-4c1a-ac37-648bdb076ebe" containerID="e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680" exitCode=137
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.619369 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8608e197-8cbc-4c1a-ac37-648bdb076ebe","Type":"ContainerDied","Data":"e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680"}
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.619459 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8608e197-8cbc-4c1a-ac37-648bdb076ebe","Type":"ContainerDied","Data":"f8c732f9846b8f3efe5b944cc08f712c8afff58bec403eafd117e392c1ec4dff"}
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.619381 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.619484 4770 scope.go:117] "RemoveContainer" containerID="e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.626666 4770 generic.go:334] "Generic (PLEG): container finished" podID="20162029-a6e5-432d-93e1-54d7d9aeed22" containerID="83efbc95987ae6603ab00b88f9ac384854c2d318fbb55261d926363a7b2630f5" exitCode=0
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.626798 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8vx5w" event={"ID":"20162029-a6e5-432d-93e1-54d7d9aeed22","Type":"ContainerDied","Data":"83efbc95987ae6603ab00b88f9ac384854c2d318fbb55261d926363a7b2630f5"}
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.632278 4770 generic.go:334] "Generic (PLEG): container finished" podID="d79eb99a-b722-426a-aab8-ff0404972446" containerID="c1d4af286b992f1d650444bce28aff09e3d4c7631473b64bc3e6f29d3185fef5" exitCode=0
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.632378 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7735-account-create-update-5p282" event={"ID":"d79eb99a-b722-426a-aab8-ff0404972446","Type":"ContainerDied","Data":"c1d4af286b992f1d650444bce28aff09e3d4c7631473b64bc3e6f29d3185fef5"}
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.640652 4770 generic.go:334] "Generic (PLEG): container finished" podID="13c4b0e5-c858-457d-ad30-71166591f03f" containerID="a6b7832d633e1b84ae204cc50e3dc40e5b7dac8b935f9aca182835b63a5713e2" exitCode=0
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.640711 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4266-account-create-update-lc6m7" event={"ID":"13c4b0e5-c858-457d-ad30-71166591f03f","Type":"ContainerDied","Data":"a6b7832d633e1b84ae204cc50e3dc40e5b7dac8b935f9aca182835b63a5713e2"}
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.640840 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.640852 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.665786 4770 scope.go:117] "RemoveContainer" containerID="e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680"
Dec 09 14:48:50 crc kubenswrapper[4770]: E1209 14:48:50.666551 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680\": container with ID starting with e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680 not found: ID does not exist" containerID="e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.666583 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680"} err="failed to get container status \"e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680\": rpc error: code = NotFound desc = could not find container \"e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680\": container with ID starting with e4d730a17d7bff657fa33acc5cedaf832665771145aeb5043cdaedc5bcda4680 not found: ID does not exist"
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.688508 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data-custom\") pod \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") "
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.688613 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-combined-ca-bundle\") pod \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") "
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.688637 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data\") pod \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") "
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.688700 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-certs\") pod \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") "
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.689480 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47r5s\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-kube-api-access-47r5s\") pod \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") "
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.689588 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-scripts\") pod \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\" (UID: \"8608e197-8cbc-4c1a-ac37-648bdb076ebe\") "
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.698736 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-kube-api-access-47r5s" (OuterVolumeSpecName: "kube-api-access-47r5s") pod "8608e197-8cbc-4c1a-ac37-648bdb076ebe" (UID: "8608e197-8cbc-4c1a-ac37-648bdb076ebe"). InnerVolumeSpecName "kube-api-access-47r5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.703170 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8608e197-8cbc-4c1a-ac37-648bdb076ebe" (UID: "8608e197-8cbc-4c1a-ac37-648bdb076ebe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.707879 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-scripts" (OuterVolumeSpecName: "scripts") pod "8608e197-8cbc-4c1a-ac37-648bdb076ebe" (UID: "8608e197-8cbc-4c1a-ac37-648bdb076ebe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.713008 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-certs" (OuterVolumeSpecName: "certs") pod "8608e197-8cbc-4c1a-ac37-648bdb076ebe" (UID: "8608e197-8cbc-4c1a-ac37-648bdb076ebe"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.741378 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data" (OuterVolumeSpecName: "config-data") pod "8608e197-8cbc-4c1a-ac37-648bdb076ebe" (UID: "8608e197-8cbc-4c1a-ac37-648bdb076ebe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.749901 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8608e197-8cbc-4c1a-ac37-648bdb076ebe" (UID: "8608e197-8cbc-4c1a-ac37-648bdb076ebe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.798509 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.798541 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.798553 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.798561 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8608e197-8cbc-4c1a-ac37-648bdb076ebe-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.798569 4770 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:50 crc kubenswrapper[4770]: I1209 14:48:50.798578 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47r5s\" (UniqueName: \"kubernetes.io/projected/8608e197-8cbc-4c1a-ac37-648bdb076ebe-kube-api-access-47r5s\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.024358 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.038252 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.052355 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 14:48:51 crc kubenswrapper[4770]: E1209 14:48:51.053884 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5700d19d-1648-4915-a6a2-39d8206f438c" containerName="mariadb-database-create" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.053944 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5700d19d-1648-4915-a6a2-39d8206f438c" containerName="mariadb-database-create" Dec 09 14:48:51 crc kubenswrapper[4770]: E1209 14:48:51.053974 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8608e197-8cbc-4c1a-ac37-648bdb076ebe" containerName="cloudkitty-proc" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.053980 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8608e197-8cbc-4c1a-ac37-648bdb076ebe" containerName="cloudkitty-proc" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.054176 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8608e197-8cbc-4c1a-ac37-648bdb076ebe" containerName="cloudkitty-proc" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.054196 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5700d19d-1648-4915-a6a2-39d8206f438c" containerName="mariadb-database-create" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.055270 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.057218 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.063036 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.107927 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.108461 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="ceilometer-central-agent" containerID="cri-o://f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d" gracePeriod=30 Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.109082 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="proxy-httpd" containerID="cri-o://90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8" gracePeriod=30 Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.109660 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="ceilometer-notification-agent" containerID="cri-o://a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f" gracePeriod=30 Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.109688 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="sg-core" containerID="cri-o://7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c" gracePeriod=30 Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.149637 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.220276 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cj59\" (UniqueName: \"kubernetes.io/projected/ff5214ff-7385-4380-be7e-1928902df33a-kube-api-access-6cj59\") pod \"ff5214ff-7385-4380-be7e-1928902df33a\" (UID: \"ff5214ff-7385-4380-be7e-1928902df33a\") " Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.222522 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff5214ff-7385-4380-be7e-1928902df33a-operator-scripts\") pod \"ff5214ff-7385-4380-be7e-1928902df33a\" (UID: \"ff5214ff-7385-4380-be7e-1928902df33a\") " Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.223062 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whm88\" (UniqueName: \"kubernetes.io/projected/6088cd85-c911-4bfb-88cf-4c837adf9548-kube-api-access-whm88\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.223111 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6088cd85-c911-4bfb-88cf-4c837adf9548-certs\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.223406 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.223437 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-scripts\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.223470 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.223501 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-config-data\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.224121 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5214ff-7385-4380-be7e-1928902df33a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff5214ff-7385-4380-be7e-1928902df33a" (UID: "ff5214ff-7385-4380-be7e-1928902df33a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.254840 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5214ff-7385-4380-be7e-1928902df33a-kube-api-access-6cj59" (OuterVolumeSpecName: "kube-api-access-6cj59") pod "ff5214ff-7385-4380-be7e-1928902df33a" (UID: "ff5214ff-7385-4380-be7e-1928902df33a"). InnerVolumeSpecName "kube-api-access-6cj59". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.325777 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.325846 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-config-data\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.325971 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whm88\" (UniqueName: \"kubernetes.io/projected/6088cd85-c911-4bfb-88cf-4c837adf9548-kube-api-access-whm88\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.326008 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6088cd85-c911-4bfb-88cf-4c837adf9548-certs\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.326137 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.326160 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-scripts\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.326225 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cj59\" (UniqueName: \"kubernetes.io/projected/ff5214ff-7385-4380-be7e-1928902df33a-kube-api-access-6cj59\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.326241 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff5214ff-7385-4380-be7e-1928902df33a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.333425 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: 
\"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.334241 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-scripts\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.335551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6088cd85-c911-4bfb-88cf-4c837adf9548-certs\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.336451 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-config-data\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.337941 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6088cd85-c911-4bfb-88cf-4c837adf9548-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.362313 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whm88\" (UniqueName: \"kubernetes.io/projected/6088cd85-c911-4bfb-88cf-4c837adf9548-kube-api-access-whm88\") pod \"cloudkitty-proc-0\" (UID: \"6088cd85-c911-4bfb-88cf-4c837adf9548\") " pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.428392 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.455345 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.530979 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c88hj\" (UniqueName: \"kubernetes.io/projected/6f4b2573-dd09-4e1a-9724-af01d57049ca-kube-api-access-c88hj\") pod \"6f4b2573-dd09-4e1a-9724-af01d57049ca\" (UID: \"6f4b2573-dd09-4e1a-9724-af01d57049ca\") " Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.531055 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4b2573-dd09-4e1a-9724-af01d57049ca-operator-scripts\") pod \"6f4b2573-dd09-4e1a-9724-af01d57049ca\" (UID: \"6f4b2573-dd09-4e1a-9724-af01d57049ca\") " Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.532136 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f4b2573-dd09-4e1a-9724-af01d57049ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f4b2573-dd09-4e1a-9724-af01d57049ca" (UID: "6f4b2573-dd09-4e1a-9724-af01d57049ca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.537297 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4b2573-dd09-4e1a-9724-af01d57049ca-kube-api-access-c88hj" (OuterVolumeSpecName: "kube-api-access-c88hj") pod "6f4b2573-dd09-4e1a-9724-af01d57049ca" (UID: "6f4b2573-dd09-4e1a-9724-af01d57049ca"). InnerVolumeSpecName "kube-api-access-c88hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.644269 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c88hj\" (UniqueName: \"kubernetes.io/projected/6f4b2573-dd09-4e1a-9724-af01d57049ca-kube-api-access-c88hj\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.644573 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4b2573-dd09-4e1a-9724-af01d57049ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.665470 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f887-account-create-update-22fct" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.668813 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f887-account-create-update-22fct" event={"ID":"6f4b2573-dd09-4e1a-9724-af01d57049ca","Type":"ContainerDied","Data":"c09aa7ac02e783a90433588cb7b9bfb96efcc5ffc67d40de72d89daade2e1858"} Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.668855 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c09aa7ac02e783a90433588cb7b9bfb96efcc5ffc67d40de72d89daade2e1858" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.674151 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6j4pg" event={"ID":"ff5214ff-7385-4380-be7e-1928902df33a","Type":"ContainerDied","Data":"2e70a3a1b6a287650c0bc7bea55c67fc9507a504c91c36097d4cd2a545f01c8e"} Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.674182 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e70a3a1b6a287650c0bc7bea55c67fc9507a504c91c36097d4cd2a545f01c8e" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.674231 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6j4pg" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.685779 4770 generic.go:334] "Generic (PLEG): container finished" podID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerID="90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8" exitCode=0 Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.685821 4770 generic.go:334] "Generic (PLEG): container finished" podID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerID="7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c" exitCode=2 Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.685834 4770 generic.go:334] "Generic (PLEG): container finished" podID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerID="a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f" exitCode=0 Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.686156 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerDied","Data":"90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8"} Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.688300 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerDied","Data":"7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c"} Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.688337 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerDied","Data":"a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f"} Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.729669 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.729801 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.731203 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 14:48:51 crc kubenswrapper[4770]: I1209 14:48:51.937483 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 14:48:51 crc kubenswrapper[4770]: W1209 14:48:51.950171 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6088cd85_c911_4bfb_88cf_4c837adf9548.slice/crio-fe9a1ae6e6ec154827d1dc0127fa7929295f8386f19cbcda7d9676eeabc4b5d9 WatchSource:0}: Error finding container fe9a1ae6e6ec154827d1dc0127fa7929295f8386f19cbcda7d9676eeabc4b5d9: Status 404 returned error can't find the container with id fe9a1ae6e6ec154827d1dc0127fa7929295f8386f19cbcda7d9676eeabc4b5d9 Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.161435 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.261270 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7q5j\" (UniqueName: \"kubernetes.io/projected/20162029-a6e5-432d-93e1-54d7d9aeed22-kube-api-access-v7q5j\") pod \"20162029-a6e5-432d-93e1-54d7d9aeed22\" (UID: \"20162029-a6e5-432d-93e1-54d7d9aeed22\") " Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.262289 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20162029-a6e5-432d-93e1-54d7d9aeed22-operator-scripts\") pod \"20162029-a6e5-432d-93e1-54d7d9aeed22\" (UID: \"20162029-a6e5-432d-93e1-54d7d9aeed22\") " Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.263801 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20162029-a6e5-432d-93e1-54d7d9aeed22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20162029-a6e5-432d-93e1-54d7d9aeed22" (UID: "20162029-a6e5-432d-93e1-54d7d9aeed22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.282987 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20162029-a6e5-432d-93e1-54d7d9aeed22-kube-api-access-v7q5j" (OuterVolumeSpecName: "kube-api-access-v7q5j") pod "20162029-a6e5-432d-93e1-54d7d9aeed22" (UID: "20162029-a6e5-432d-93e1-54d7d9aeed22"). InnerVolumeSpecName "kube-api-access-v7q5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.366434 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7q5j\" (UniqueName: \"kubernetes.io/projected/20162029-a6e5-432d-93e1-54d7d9aeed22-kube-api-access-v7q5j\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.366468 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20162029-a6e5-432d-93e1-54d7d9aeed22-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.500211 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.504620 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.644713 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8608e197-8cbc-4c1a-ac37-648bdb076ebe" path="/var/lib/kubelet/pods/8608e197-8cbc-4c1a-ac37-648bdb076ebe/volumes" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.677856 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c4b0e5-c858-457d-ad30-71166591f03f-operator-scripts\") pod \"13c4b0e5-c858-457d-ad30-71166591f03f\" (UID: \"13c4b0e5-c858-457d-ad30-71166591f03f\") " Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.678018 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p276q\" (UniqueName: \"kubernetes.io/projected/13c4b0e5-c858-457d-ad30-71166591f03f-kube-api-access-p276q\") pod \"13c4b0e5-c858-457d-ad30-71166591f03f\" (UID: \"13c4b0e5-c858-457d-ad30-71166591f03f\") " Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.678052 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d79eb99a-b722-426a-aab8-ff0404972446-operator-scripts\") pod \"d79eb99a-b722-426a-aab8-ff0404972446\" (UID: \"d79eb99a-b722-426a-aab8-ff0404972446\") " Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.678156 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc9qt\" (UniqueName: \"kubernetes.io/projected/d79eb99a-b722-426a-aab8-ff0404972446-kube-api-access-xc9qt\") pod \"d79eb99a-b722-426a-aab8-ff0404972446\" (UID: \"d79eb99a-b722-426a-aab8-ff0404972446\") " Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.680416 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79eb99a-b722-426a-aab8-ff0404972446-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d79eb99a-b722-426a-aab8-ff0404972446" (UID: "d79eb99a-b722-426a-aab8-ff0404972446"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.681101 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c4b0e5-c858-457d-ad30-71166591f03f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13c4b0e5-c858-457d-ad30-71166591f03f" (UID: "13c4b0e5-c858-457d-ad30-71166591f03f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.682596 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79eb99a-b722-426a-aab8-ff0404972446-kube-api-access-xc9qt" (OuterVolumeSpecName: "kube-api-access-xc9qt") pod "d79eb99a-b722-426a-aab8-ff0404972446" (UID: "d79eb99a-b722-426a-aab8-ff0404972446"). InnerVolumeSpecName "kube-api-access-xc9qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.699018 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c4b0e5-c858-457d-ad30-71166591f03f-kube-api-access-p276q" (OuterVolumeSpecName: "kube-api-access-p276q") pod "13c4b0e5-c858-457d-ad30-71166591f03f" (UID: "13c4b0e5-c858-457d-ad30-71166591f03f"). InnerVolumeSpecName "kube-api-access-p276q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.709877 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4266-account-create-update-lc6m7" event={"ID":"13c4b0e5-c858-457d-ad30-71166591f03f","Type":"ContainerDied","Data":"a74bf5420f3561b1a5790cfd015309d44c01d7b790e312823c213b2e0c71c490"} Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.709919 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74bf5420f3561b1a5790cfd015309d44c01d7b790e312823c213b2e0c71c490" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.709992 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4266-account-create-update-lc6m7" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.714138 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"6088cd85-c911-4bfb-88cf-4c837adf9548","Type":"ContainerStarted","Data":"81eafb06296490df079c8859a0c78489c5e4a5c4cbf7f40833fdf4ef8bf4601f"} Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.714179 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"6088cd85-c911-4bfb-88cf-4c837adf9548","Type":"ContainerStarted","Data":"fe9a1ae6e6ec154827d1dc0127fa7929295f8386f19cbcda7d9676eeabc4b5d9"} Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.716341 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8vx5w" event={"ID":"20162029-a6e5-432d-93e1-54d7d9aeed22","Type":"ContainerDied","Data":"d0322246ca9cabcb7a207a5bc0c206874c9ae666ee6323aa847924ddc5b80f90"} Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.716384 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0322246ca9cabcb7a207a5bc0c206874c9ae666ee6323aa847924ddc5b80f90" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.716466 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8vx5w" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.730003 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7735-account-create-update-5p282" event={"ID":"d79eb99a-b722-426a-aab8-ff0404972446","Type":"ContainerDied","Data":"08c5714f77daad4f702be2cb717dea4cab47430dd0ea97ed43668ba892c0ce4c"} Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.730053 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08c5714f77daad4f702be2cb717dea4cab47430dd0ea97ed43668ba892c0ce4c" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.730067 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7735-account-create-update-5p282" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.743540 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.743517932 podStartE2EDuration="2.743517932s" podCreationTimestamp="2025-12-09 14:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:48:52.740996792 +0000 UTC m=+1564.637198928" watchObservedRunningTime="2025-12-09 14:48:52.743517932 +0000 UTC m=+1564.639720078" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.780546 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c4b0e5-c858-457d-ad30-71166591f03f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.780579 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p276q\" (UniqueName: \"kubernetes.io/projected/13c4b0e5-c858-457d-ad30-71166591f03f-kube-api-access-p276q\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.780590 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d79eb99a-b722-426a-aab8-ff0404972446-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:52 crc kubenswrapper[4770]: I1209 14:48:52.780599 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc9qt\" (UniqueName: \"kubernetes.io/projected/d79eb99a-b722-426a-aab8-ff0404972446-kube-api-access-xc9qt\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.996710 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-94lv6"] Dec 09 14:48:56 crc kubenswrapper[4770]: E1209 14:48:56.998436 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c4b0e5-c858-457d-ad30-71166591f03f" containerName="mariadb-account-create-update" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.998483 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c4b0e5-c858-457d-ad30-71166591f03f" containerName="mariadb-account-create-update" Dec 09 14:48:56 crc kubenswrapper[4770]: E1209 14:48:56.998518 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20162029-a6e5-432d-93e1-54d7d9aeed22" containerName="mariadb-database-create" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.998529 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="20162029-a6e5-432d-93e1-54d7d9aeed22" containerName="mariadb-database-create" Dec 09 14:48:56 crc kubenswrapper[4770]: E1209 14:48:56.998563 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79eb99a-b722-426a-aab8-ff0404972446" containerName="mariadb-account-create-update" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.998572 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79eb99a-b722-426a-aab8-ff0404972446" containerName="mariadb-account-create-update" Dec 09 14:48:56 crc kubenswrapper[4770]: E1209 14:48:56.998588 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4b2573-dd09-4e1a-9724-af01d57049ca" containerName="mariadb-account-create-update" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.998595 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6f4b2573-dd09-4e1a-9724-af01d57049ca" containerName="mariadb-account-create-update" Dec 09 14:48:56 crc kubenswrapper[4770]: E1209 14:48:56.998609 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5214ff-7385-4380-be7e-1928902df33a" containerName="mariadb-database-create" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.998616 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5214ff-7385-4380-be7e-1928902df33a" containerName="mariadb-database-create" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.998976 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5214ff-7385-4380-be7e-1928902df33a" containerName="mariadb-database-create" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.998989 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79eb99a-b722-426a-aab8-ff0404972446" containerName="mariadb-account-create-update" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.999001 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c4b0e5-c858-457d-ad30-71166591f03f" containerName="mariadb-account-create-update" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.999025 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="20162029-a6e5-432d-93e1-54d7d9aeed22" containerName="mariadb-database-create" Dec 09 14:48:56 crc kubenswrapper[4770]: I1209 14:48:56.999040 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4b2573-dd09-4e1a-9724-af01d57049ca" containerName="mariadb-account-create-update" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.000373 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.002907 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.003543 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.003677 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vwqct" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.028589 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-94lv6"] Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.139437 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdfr\" (UniqueName: \"kubernetes.io/projected/959155d5-0b58-4005-b2cf-5e2dd53e4f06-kube-api-access-jfdfr\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.139510 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-scripts\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.139566 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-config-data\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.139654 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.243175 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfdfr\" (UniqueName: \"kubernetes.io/projected/959155d5-0b58-4005-b2cf-5e2dd53e4f06-kube-api-access-jfdfr\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.243265 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-scripts\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.243320 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-config-data\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.243411 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.252481 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.252531 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-config-data\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.259738 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-scripts\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.261782 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jfdfr\" (UniqueName: \"kubernetes.io/projected/959155d5-0b58-4005-b2cf-5e2dd53e4f06-kube-api-access-jfdfr\") pod \"nova-cell0-conductor-db-sync-94lv6\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.353344 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:48:57 crc kubenswrapper[4770]: I1209 14:48:57.865076 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-94lv6"] Dec 09 14:48:58 crc kubenswrapper[4770]: I1209 14:48:58.814433 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-94lv6" event={"ID":"959155d5-0b58-4005-b2cf-5e2dd53e4f06","Type":"ContainerStarted","Data":"8879b828d1086b803a1a74781ab9e69c309e12171a3c974faf41e80e07fec272"} Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.731569 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.824275 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-config-data\") pod \"be8aafa6-5ec8-4c5f-885f-d4342943827a\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.824357 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-sg-core-conf-yaml\") pod \"be8aafa6-5ec8-4c5f-885f-d4342943827a\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.824406 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-log-httpd\") pod \"be8aafa6-5ec8-4c5f-885f-d4342943827a\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.824434 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pclcv\" (UniqueName: \"kubernetes.io/projected/be8aafa6-5ec8-4c5f-885f-d4342943827a-kube-api-access-pclcv\") pod \"be8aafa6-5ec8-4c5f-885f-d4342943827a\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.824452 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-run-httpd\") pod \"be8aafa6-5ec8-4c5f-885f-d4342943827a\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.824904 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be8aafa6-5ec8-4c5f-885f-d4342943827a" (UID: "be8aafa6-5ec8-4c5f-885f-d4342943827a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.825007 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be8aafa6-5ec8-4c5f-885f-d4342943827a" (UID: "be8aafa6-5ec8-4c5f-885f-d4342943827a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.828802 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-scripts\") pod \"be8aafa6-5ec8-4c5f-885f-d4342943827a\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.829071 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-combined-ca-bundle\") pod \"be8aafa6-5ec8-4c5f-885f-d4342943827a\" (UID: \"be8aafa6-5ec8-4c5f-885f-d4342943827a\") " Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.829825 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.829841 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be8aafa6-5ec8-4c5f-885f-d4342943827a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.831061 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be8aafa6-5ec8-4c5f-885f-d4342943827a-kube-api-access-pclcv" (OuterVolumeSpecName: "kube-api-access-pclcv") pod "be8aafa6-5ec8-4c5f-885f-d4342943827a" (UID: "be8aafa6-5ec8-4c5f-885f-d4342943827a"). InnerVolumeSpecName "kube-api-access-pclcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.834368 4770 generic.go:334] "Generic (PLEG): container finished" podID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerID="f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d" exitCode=0 Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.834416 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerDied","Data":"f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d"} Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.834452 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be8aafa6-5ec8-4c5f-885f-d4342943827a","Type":"ContainerDied","Data":"0b92bd54dc0503dfef9f052c8aacb8934e221855e7f6c99cbdcc88d3f5d6d1ca"} Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.834514 4770 scope.go:117] "RemoveContainer" containerID="90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.834686 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.840473 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-scripts" (OuterVolumeSpecName: "scripts") pod "be8aafa6-5ec8-4c5f-885f-d4342943827a" (UID: "be8aafa6-5ec8-4c5f-885f-d4342943827a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.865263 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be8aafa6-5ec8-4c5f-885f-d4342943827a" (UID: "be8aafa6-5ec8-4c5f-885f-d4342943827a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.914496 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be8aafa6-5ec8-4c5f-885f-d4342943827a" (UID: "be8aafa6-5ec8-4c5f-885f-d4342943827a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.933549 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.933591 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pclcv\" (UniqueName: \"kubernetes.io/projected/be8aafa6-5ec8-4c5f-885f-d4342943827a-kube-api-access-pclcv\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.933605 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.933616 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:48:59 crc kubenswrapper[4770]: I1209 14:48:59.943561 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-config-data" (OuterVolumeSpecName: "config-data") pod "be8aafa6-5ec8-4c5f-885f-d4342943827a" (UID: "be8aafa6-5ec8-4c5f-885f-d4342943827a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.011255 4770 scope.go:117] "RemoveContainer" containerID="7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.035334 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8aafa6-5ec8-4c5f-885f-d4342943827a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.040654 4770 scope.go:117] "RemoveContainer" containerID="a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.065467 4770 scope.go:117] "RemoveContainer" containerID="f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.106927 4770 scope.go:117] "RemoveContainer" containerID="90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8" Dec 09 14:49:00 crc kubenswrapper[4770]: E1209 14:49:00.107455 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8\": container with ID starting with 90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8 not found: ID does not exist" containerID="90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.107502 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8"} err="failed to get container status \"90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8\": rpc error: code = NotFound desc = could not find container \"90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8\": container with ID starting with 90631cc5c3ffad37cdc58fc3605882dbad3c22039f83bb5ef1ebee4fe4f2e6e8 not found: ID does not exist" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.107522 4770 scope.go:117] "RemoveContainer" containerID="7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c" Dec 09 14:49:00 crc kubenswrapper[4770]: E1209 14:49:00.107806 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c\": container with ID starting with 7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c not found: ID does not exist" containerID="7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.107825 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c"} err="failed to get container status \"7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c\": rpc error: code = NotFound desc = could not find container \"7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c\": container with ID starting with 7d80626531286ef7d86bac2d5d6105f1f2daaa2c0a1a809591ef2a0b8856e42c not found: ID does not exist" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.107839 4770 scope.go:117] "RemoveContainer" containerID="a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f" Dec 09 14:49:00 crc kubenswrapper[4770]: E1209 
14:49:00.108428 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f\": container with ID starting with a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f not found: ID does not exist" containerID="a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.108471 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f"} err="failed to get container status \"a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f\": rpc error: code = NotFound desc = could not find container \"a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f\": container with ID starting with a8dcc2b0cf5c942ef0e8c9105dbad847de3dff145c596ec08562bebfcf46734f not found: ID does not exist" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.108486 4770 scope.go:117] "RemoveContainer" containerID="f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d" Dec 09 14:49:00 crc kubenswrapper[4770]: E1209 14:49:00.109150 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d\": container with ID starting with f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d not found: ID does not exist" containerID="f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.109238 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d"} err="failed to get container status \"f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d\": rpc error: code = NotFound desc = could not find container \"f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d\": container with ID starting with f0432ea1dda78c4d229a76d49672e154033825d86ba539af917cc60c14ad9d2d not found: ID does not exist" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.196595 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.216235 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.231798 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:00 crc kubenswrapper[4770]: E1209 14:49:00.232398 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="ceilometer-central-agent" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.232423 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="ceilometer-central-agent" Dec 09 14:49:00 crc kubenswrapper[4770]: E1209 14:49:00.232436 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="ceilometer-notification-agent" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.232444 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="ceilometer-notification-agent" Dec 09 14:49:00 crc 
kubenswrapper[4770]: E1209 14:49:00.232465 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="proxy-httpd" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.232474 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="proxy-httpd" Dec 09 14:49:00 crc kubenswrapper[4770]: E1209 14:49:00.232484 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="sg-core" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.232491 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="sg-core" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.232754 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="ceilometer-notification-agent" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.232777 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="sg-core" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.232798 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="proxy-httpd" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.232817 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" containerName="ceilometer-central-agent" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.235242 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.239406 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.239611 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.241411 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.339671 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.339768 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.339816 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-config-data\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.339854 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4nxz\" (UniqueName: 
\"kubernetes.io/projected/3e28d164-e612-49e2-a276-1d4d0477dbab-kube-api-access-n4nxz\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.339887 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-scripts\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.339917 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-log-httpd\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.339985 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-run-httpd\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.441319 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-run-httpd\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.441419 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.441466 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.441496 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-config-data\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.441527 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4nxz\" (UniqueName: \"kubernetes.io/projected/3e28d164-e612-49e2-a276-1d4d0477dbab-kube-api-access-n4nxz\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.441550 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-scripts\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.441572 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-log-httpd\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.442066 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-run-httpd\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.442676 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-log-httpd\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.447372 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-config-data\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.447742 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.448214 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.456435 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-scripts\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.460192 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4nxz\" (UniqueName: \"kubernetes.io/projected/3e28d164-e612-49e2-a276-1d4d0477dbab-kube-api-access-n4nxz\") pod \"ceilometer-0\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " pod="openstack/ceilometer-0" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.600451 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be8aafa6-5ec8-4c5f-885f-d4342943827a" path="/var/lib/kubelet/pods/be8aafa6-5ec8-4c5f-885f-d4342943827a/volumes" Dec 09 14:49:00 crc kubenswrapper[4770]: I1209 14:49:00.642128 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:49:01 crc kubenswrapper[4770]: I1209 14:49:01.151351 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:01 crc kubenswrapper[4770]: W1209 14:49:01.163794 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e28d164_e612_49e2_a276_1d4d0477dbab.slice/crio-04898a36f73ffdab328817dbfd1be4e215d76b87e8670bc2eeab566b0eb47c6e WatchSource:0}: Error finding container 04898a36f73ffdab328817dbfd1be4e215d76b87e8670bc2eeab566b0eb47c6e: Status 404 returned error can't find the container with id 04898a36f73ffdab328817dbfd1be4e215d76b87e8670bc2eeab566b0eb47c6e Dec 09 14:49:01 crc kubenswrapper[4770]: I1209 14:49:01.879619 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerStarted","Data":"04898a36f73ffdab328817dbfd1be4e215d76b87e8670bc2eeab566b0eb47c6e"} Dec 09 14:49:06 crc kubenswrapper[4770]: I1209 14:49:06.146645 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Dec 09 14:49:07 crc kubenswrapper[4770]: I1209 14:49:07.965005 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerStarted","Data":"eb9cc5341f2c4a08729a0f5e18548acaa156f9300e52e22b2e741c11de27ad5e"} Dec 09 14:49:07 crc kubenswrapper[4770]: I1209 14:49:07.965545 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerStarted","Data":"16b2feec53152b35dda82a5b60d098afa14cb5725e64991b482b97778f1ca37d"} Dec 09 14:49:07 crc kubenswrapper[4770]: I1209 14:49:07.967224 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-94lv6" event={"ID":"959155d5-0b58-4005-b2cf-5e2dd53e4f06","Type":"ContainerStarted","Data":"0e6bfc815819f1d161e04d054694241812021fef1409346a3c99fe68253fc865"} Dec 09 14:49:07 crc kubenswrapper[4770]: I1209 14:49:07.990990 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-94lv6" podStartSLOduration=3.026792868 podStartE2EDuration="11.990957328s" podCreationTimestamp="2025-12-09 14:48:56 +0000 UTC" firstStartedPulling="2025-12-09 14:48:57.879903625 +0000 UTC m=+1569.776105761" lastFinishedPulling="2025-12-09 14:49:06.844068085 +0000 UTC m=+1578.740270221" observedRunningTime="2025-12-09 14:49:07.980838926 +0000 UTC m=+1579.877041082" watchObservedRunningTime="2025-12-09 14:49:07.990957328 +0000 UTC m=+1579.887159464" Dec 09 14:49:10 crc kubenswrapper[4770]: I1209 14:49:10.005175 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerStarted","Data":"3065d0bed6a3f90bce9847f1f8bd04b8886e116b390c698e6e310809734b1c27"} Dec 09 14:49:11 crc kubenswrapper[4770]: I1209 14:49:11.021609 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerStarted","Data":"ff2dcee78b796715ac2e5f8c98430544afbc36314bc5ce4e745aebb06b892011"} Dec 09 14:49:11 crc kubenswrapper[4770]: I1209 14:49:11.022250 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 14:49:11 crc 
kubenswrapper[4770]: I1209 14:49:11.052687 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.617155552 podStartE2EDuration="11.052667962s" podCreationTimestamp="2025-12-09 14:49:00 +0000 UTC" firstStartedPulling="2025-12-09 14:49:01.165965833 +0000 UTC m=+1573.062167969" lastFinishedPulling="2025-12-09 14:49:10.601478243 +0000 UTC m=+1582.497680379" observedRunningTime="2025-12-09 14:49:11.044812923 +0000 UTC m=+1582.941015069" watchObservedRunningTime="2025-12-09 14:49:11.052667962 +0000 UTC m=+1582.948870098" Dec 09 14:49:14 crc kubenswrapper[4770]: I1209 14:49:14.243975 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:49:14 crc kubenswrapper[4770]: I1209 14:49:14.244562 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:49:14 crc kubenswrapper[4770]: I1209 14:49:14.723972 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:14 crc kubenswrapper[4770]: I1209 14:49:14.724528 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="ceilometer-central-agent" containerID="cri-o://16b2feec53152b35dda82a5b60d098afa14cb5725e64991b482b97778f1ca37d" gracePeriod=30 Dec 09 14:49:14 crc kubenswrapper[4770]: I1209 14:49:14.724574 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="proxy-httpd" containerID="cri-o://ff2dcee78b796715ac2e5f8c98430544afbc36314bc5ce4e745aebb06b892011" gracePeriod=30 Dec 09 14:49:14 crc kubenswrapper[4770]: I1209 14:49:14.724574 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="sg-core" containerID="cri-o://3065d0bed6a3f90bce9847f1f8bd04b8886e116b390c698e6e310809734b1c27" gracePeriod=30 Dec 09 14:49:14 crc kubenswrapper[4770]: I1209 14:49:14.724620 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="ceilometer-notification-agent" containerID="cri-o://eb9cc5341f2c4a08729a0f5e18548acaa156f9300e52e22b2e741c11de27ad5e" gracePeriod=30 Dec 09 14:49:15 crc kubenswrapper[4770]: I1209 14:49:15.144886 4770 generic.go:334] "Generic (PLEG): container finished" podID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerID="ff2dcee78b796715ac2e5f8c98430544afbc36314bc5ce4e745aebb06b892011" exitCode=0 Dec 09 14:49:15 crc kubenswrapper[4770]: I1209 14:49:15.144920 4770 generic.go:334] "Generic (PLEG): container finished" podID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerID="3065d0bed6a3f90bce9847f1f8bd04b8886e116b390c698e6e310809734b1c27" exitCode=2 Dec 09 14:49:15 crc kubenswrapper[4770]: I1209 14:49:15.144940 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerDied","Data":"ff2dcee78b796715ac2e5f8c98430544afbc36314bc5ce4e745aebb06b892011"} Dec 09 14:49:15 crc kubenswrapper[4770]: I1209 14:49:15.144967 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerDied","Data":"3065d0bed6a3f90bce9847f1f8bd04b8886e116b390c698e6e310809734b1c27"} Dec 09 14:49:16 crc kubenswrapper[4770]: I1209 14:49:16.266492 4770 generic.go:334] "Generic (PLEG): container finished" podID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerID="eb9cc5341f2c4a08729a0f5e18548acaa156f9300e52e22b2e741c11de27ad5e" exitCode=0 Dec 09 14:49:16 crc kubenswrapper[4770]: I1209 14:49:16.266571 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerDied","Data":"eb9cc5341f2c4a08729a0f5e18548acaa156f9300e52e22b2e741c11de27ad5e"} Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.292350 4770 generic.go:334] "Generic (PLEG): container finished" podID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerID="16b2feec53152b35dda82a5b60d098afa14cb5725e64991b482b97778f1ca37d" exitCode=0 Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.292415 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerDied","Data":"16b2feec53152b35dda82a5b60d098afa14cb5725e64991b482b97778f1ca37d"} Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.667612 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.792935 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4nxz\" (UniqueName: \"kubernetes.io/projected/3e28d164-e612-49e2-a276-1d4d0477dbab-kube-api-access-n4nxz\") pod \"3e28d164-e612-49e2-a276-1d4d0477dbab\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.793040 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-scripts\") pod \"3e28d164-e612-49e2-a276-1d4d0477dbab\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.793148 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-run-httpd\") pod \"3e28d164-e612-49e2-a276-1d4d0477dbab\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.793272 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-log-httpd\") pod \"3e28d164-e612-49e2-a276-1d4d0477dbab\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.793318 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-sg-core-conf-yaml\") pod \"3e28d164-e612-49e2-a276-1d4d0477dbab\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.793408 4770 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-combined-ca-bundle\") pod \"3e28d164-e612-49e2-a276-1d4d0477dbab\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.793444 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-config-data\") pod \"3e28d164-e612-49e2-a276-1d4d0477dbab\" (UID: \"3e28d164-e612-49e2-a276-1d4d0477dbab\") " Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.793491 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3e28d164-e612-49e2-a276-1d4d0477dbab" (UID: "3e28d164-e612-49e2-a276-1d4d0477dbab"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.793696 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3e28d164-e612-49e2-a276-1d4d0477dbab" (UID: "3e28d164-e612-49e2-a276-1d4d0477dbab"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.796068 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.796118 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e28d164-e612-49e2-a276-1d4d0477dbab-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.799091 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-scripts" (OuterVolumeSpecName: "scripts") pod "3e28d164-e612-49e2-a276-1d4d0477dbab" (UID: "3e28d164-e612-49e2-a276-1d4d0477dbab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.799195 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e28d164-e612-49e2-a276-1d4d0477dbab-kube-api-access-n4nxz" (OuterVolumeSpecName: "kube-api-access-n4nxz") pod "3e28d164-e612-49e2-a276-1d4d0477dbab" (UID: "3e28d164-e612-49e2-a276-1d4d0477dbab"). InnerVolumeSpecName "kube-api-access-n4nxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.857239 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3e28d164-e612-49e2-a276-1d4d0477dbab" (UID: "3e28d164-e612-49e2-a276-1d4d0477dbab"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.890991 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e28d164-e612-49e2-a276-1d4d0477dbab" (UID: "3e28d164-e612-49e2-a276-1d4d0477dbab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.897145 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.897182 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.897198 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4nxz\" (UniqueName: \"kubernetes.io/projected/3e28d164-e612-49e2-a276-1d4d0477dbab-kube-api-access-n4nxz\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.897211 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.921764 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-config-data" (OuterVolumeSpecName: "config-data") pod "3e28d164-e612-49e2-a276-1d4d0477dbab" (UID: "3e28d164-e612-49e2-a276-1d4d0477dbab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:17 crc kubenswrapper[4770]: I1209 14:49:17.998421 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e28d164-e612-49e2-a276-1d4d0477dbab-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.305640 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e28d164-e612-49e2-a276-1d4d0477dbab","Type":"ContainerDied","Data":"04898a36f73ffdab328817dbfd1be4e215d76b87e8670bc2eeab566b0eb47c6e"} Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.305886 4770 scope.go:117] "RemoveContainer" containerID="ff2dcee78b796715ac2e5f8c98430544afbc36314bc5ce4e745aebb06b892011" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.306019 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.336017 4770 scope.go:117] "RemoveContainer" containerID="3065d0bed6a3f90bce9847f1f8bd04b8886e116b390c698e6e310809734b1c27" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.348451 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.362111 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.374283 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:18 crc kubenswrapper[4770]: E1209 14:49:18.374734 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="ceilometer-notification-agent" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.374755 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="ceilometer-notification-agent" Dec 09 14:49:18 crc kubenswrapper[4770]: E1209 14:49:18.374764 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="proxy-httpd" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.374771 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="proxy-httpd" Dec 09 14:49:18 crc kubenswrapper[4770]: E1209 14:49:18.374785 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="sg-core" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.374791 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="sg-core" Dec 09 14:49:18 crc kubenswrapper[4770]: E1209 14:49:18.374808 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="ceilometer-central-agent" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.374813 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="ceilometer-central-agent" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.375032 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="ceilometer-notification-agent" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.375046 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="sg-core" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.375067 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="proxy-httpd" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.375093 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" containerName="ceilometer-central-agent" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.376804 4770 scope.go:117] "RemoveContainer" containerID="eb9cc5341f2c4a08729a0f5e18548acaa156f9300e52e22b2e741c11de27ad5e" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.378736 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.382187 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.382242 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.403301 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.413455 4770 scope.go:117] "RemoveContainer" containerID="16b2feec53152b35dda82a5b60d098afa14cb5725e64991b482b97778f1ca37d" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.514970 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-log-httpd\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.515018 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-run-httpd\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.515045 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.515066 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-config-data\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.515369 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn6sj\" (UniqueName: \"kubernetes.io/projected/dae509f5-4248-4e67-9d83-9ec678ca0170-kube-api-access-hn6sj\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.515551 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.515624 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-scripts\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.602075 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e28d164-e612-49e2-a276-1d4d0477dbab" 
path="/var/lib/kubelet/pods/3e28d164-e612-49e2-a276-1d4d0477dbab/volumes" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.617214 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-scripts\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.617325 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-log-httpd\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.617379 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-run-httpd\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.617401 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.617421 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-config-data\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.617798 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn6sj\" (UniqueName: \"kubernetes.io/projected/dae509f5-4248-4e67-9d83-9ec678ca0170-kube-api-access-hn6sj\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.617887 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.618811 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-run-httpd\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.618847 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-log-httpd\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.622481 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " 
pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.622942 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-config-data\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.623440 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.624303 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-scripts\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.636362 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn6sj\" (UniqueName: \"kubernetes.io/projected/dae509f5-4248-4e67-9d83-9ec678ca0170-kube-api-access-hn6sj\") pod \"ceilometer-0\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " pod="openstack/ceilometer-0" Dec 09 14:49:18 crc kubenswrapper[4770]: I1209 14:49:18.705201 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:49:19 crc kubenswrapper[4770]: I1209 14:49:19.194209 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:19 crc kubenswrapper[4770]: W1209 14:49:19.202865 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddae509f5_4248_4e67_9d83_9ec678ca0170.slice/crio-a3e2698f1515e3cd5f9271a690a9557a0a5c41d7cf7b8cf271dead8ffe5e0a87 WatchSource:0}: Error finding container a3e2698f1515e3cd5f9271a690a9557a0a5c41d7cf7b8cf271dead8ffe5e0a87: Status 404 returned error can't find the container with id a3e2698f1515e3cd5f9271a690a9557a0a5c41d7cf7b8cf271dead8ffe5e0a87 Dec 09 14:49:19 crc kubenswrapper[4770]: I1209 14:49:19.387810 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerStarted","Data":"a3e2698f1515e3cd5f9271a690a9557a0a5c41d7cf7b8cf271dead8ffe5e0a87"} Dec 09 14:49:21 crc kubenswrapper[4770]: I1209 14:49:21.641808 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerStarted","Data":"f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d"} Dec 09 14:49:22 crc kubenswrapper[4770]: I1209 14:49:22.659695 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerStarted","Data":"7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3"} Dec 09 14:49:23 crc kubenswrapper[4770]: I1209 14:49:23.672905 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerStarted","Data":"6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7"} Dec 09 14:49:24 crc kubenswrapper[4770]: I1209 14:49:24.690420 4770 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerStarted","Data":"592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59"} Dec 09 14:49:24 crc kubenswrapper[4770]: I1209 14:49:24.690954 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 14:49:24 crc kubenswrapper[4770]: I1209 14:49:24.722497 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.645399042 podStartE2EDuration="6.722475034s" podCreationTimestamp="2025-12-09 14:49:18 +0000 UTC" firstStartedPulling="2025-12-09 14:49:19.204557741 +0000 UTC m=+1591.100759877" lastFinishedPulling="2025-12-09 14:49:24.281633733 +0000 UTC m=+1596.177835869" observedRunningTime="2025-12-09 14:49:24.715382776 +0000 UTC m=+1596.611584932" watchObservedRunningTime="2025-12-09 14:49:24.722475034 +0000 UTC m=+1596.618677170" Dec 09 14:49:25 crc kubenswrapper[4770]: I1209 14:49:25.715896 4770 generic.go:334] "Generic (PLEG): container finished" podID="959155d5-0b58-4005-b2cf-5e2dd53e4f06" containerID="0e6bfc815819f1d161e04d054694241812021fef1409346a3c99fe68253fc865" exitCode=0 Dec 09 14:49:25 crc kubenswrapper[4770]: I1209 14:49:25.717814 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-94lv6" event={"ID":"959155d5-0b58-4005-b2cf-5e2dd53e4f06","Type":"ContainerDied","Data":"0e6bfc815819f1d161e04d054694241812021fef1409346a3c99fe68253fc865"} Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.146007 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.254119 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-combined-ca-bundle\") pod \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.254453 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-scripts\") pod \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.254598 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfdfr\" (UniqueName: \"kubernetes.io/projected/959155d5-0b58-4005-b2cf-5e2dd53e4f06-kube-api-access-jfdfr\") pod \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.254787 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-config-data\") pod \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\" (UID: \"959155d5-0b58-4005-b2cf-5e2dd53e4f06\") " Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.260682 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-scripts" (OuterVolumeSpecName: "scripts") pod "959155d5-0b58-4005-b2cf-5e2dd53e4f06" (UID: "959155d5-0b58-4005-b2cf-5e2dd53e4f06"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.273041 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959155d5-0b58-4005-b2cf-5e2dd53e4f06-kube-api-access-jfdfr" (OuterVolumeSpecName: "kube-api-access-jfdfr") pod "959155d5-0b58-4005-b2cf-5e2dd53e4f06" (UID: "959155d5-0b58-4005-b2cf-5e2dd53e4f06"). InnerVolumeSpecName "kube-api-access-jfdfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.303476 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "959155d5-0b58-4005-b2cf-5e2dd53e4f06" (UID: "959155d5-0b58-4005-b2cf-5e2dd53e4f06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.306712 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-config-data" (OuterVolumeSpecName: "config-data") pod "959155d5-0b58-4005-b2cf-5e2dd53e4f06" (UID: "959155d5-0b58-4005-b2cf-5e2dd53e4f06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.356867 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.356903 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.356915 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/959155d5-0b58-4005-b2cf-5e2dd53e4f06-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.356923 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfdfr\" (UniqueName: \"kubernetes.io/projected/959155d5-0b58-4005-b2cf-5e2dd53e4f06-kube-api-access-jfdfr\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.734645 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-94lv6" event={"ID":"959155d5-0b58-4005-b2cf-5e2dd53e4f06","Type":"ContainerDied","Data":"8879b828d1086b803a1a74781ab9e69c309e12171a3c974faf41e80e07fec272"} Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.734682 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8879b828d1086b803a1a74781ab9e69c309e12171a3c974faf41e80e07fec272" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.734706 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-94lv6" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.838704 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 14:49:27 crc kubenswrapper[4770]: E1209 14:49:27.839169 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959155d5-0b58-4005-b2cf-5e2dd53e4f06" containerName="nova-cell0-conductor-db-sync" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.839194 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="959155d5-0b58-4005-b2cf-5e2dd53e4f06" containerName="nova-cell0-conductor-db-sync" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.839449 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="959155d5-0b58-4005-b2cf-5e2dd53e4f06" containerName="nova-cell0-conductor-db-sync" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.840293 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.849796 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vwqct" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.849869 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.854771 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.965328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fa7178-8f34-4f79-b917-c3763e01a006-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"44fa7178-8f34-4f79-b917-c3763e01a006\") " pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.965485 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26rkr\" (UniqueName: \"kubernetes.io/projected/44fa7178-8f34-4f79-b917-c3763e01a006-kube-api-access-26rkr\") pod \"nova-cell0-conductor-0\" (UID: \"44fa7178-8f34-4f79-b917-c3763e01a006\") " pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:27 crc kubenswrapper[4770]: I1209 14:49:27.965572 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fa7178-8f34-4f79-b917-c3763e01a006-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"44fa7178-8f34-4f79-b917-c3763e01a006\") " pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:28 crc kubenswrapper[4770]: I1209 14:49:28.067168 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fa7178-8f34-4f79-b917-c3763e01a006-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"44fa7178-8f34-4f79-b917-c3763e01a006\") " pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:28 crc kubenswrapper[4770]: I1209 14:49:28.067334 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26rkr\" (UniqueName: \"kubernetes.io/projected/44fa7178-8f34-4f79-b917-c3763e01a006-kube-api-access-26rkr\") pod \"nova-cell0-conductor-0\" (UID: \"44fa7178-8f34-4f79-b917-c3763e01a006\") " pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:28 crc 
kubenswrapper[4770]: I1209 14:49:28.067425 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fa7178-8f34-4f79-b917-c3763e01a006-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"44fa7178-8f34-4f79-b917-c3763e01a006\") " pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:28 crc kubenswrapper[4770]: I1209 14:49:28.074654 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fa7178-8f34-4f79-b917-c3763e01a006-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"44fa7178-8f34-4f79-b917-c3763e01a006\") " pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:28 crc kubenswrapper[4770]: I1209 14:49:28.080513 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fa7178-8f34-4f79-b917-c3763e01a006-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"44fa7178-8f34-4f79-b917-c3763e01a006\") " pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:28 crc kubenswrapper[4770]: I1209 14:49:28.090430 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26rkr\" (UniqueName: \"kubernetes.io/projected/44fa7178-8f34-4f79-b917-c3763e01a006-kube-api-access-26rkr\") pod \"nova-cell0-conductor-0\" (UID: \"44fa7178-8f34-4f79-b917-c3763e01a006\") " pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:28 crc kubenswrapper[4770]: I1209 14:49:28.159757 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:28 crc kubenswrapper[4770]: I1209 14:49:28.689667 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 14:49:28 crc kubenswrapper[4770]: I1209 14:49:28.751090 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"44fa7178-8f34-4f79-b917-c3763e01a006","Type":"ContainerStarted","Data":"36f1c744a44518308133f6bd218c4af7d4c37d42107e75dcf945058bdef1d188"} Dec 09 14:49:29 crc kubenswrapper[4770]: I1209 14:49:29.772289 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"44fa7178-8f34-4f79-b917-c3763e01a006","Type":"ContainerStarted","Data":"27400d49d3c3a47ee24e3c72538121e3516d43130213d186e35f541b9fb45d34"} Dec 09 14:49:29 crc kubenswrapper[4770]: I1209 14:49:29.772884 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.193234 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.214092 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=6.21407182 podStartE2EDuration="6.21407182s" podCreationTimestamp="2025-12-09 14:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:49:29.792239052 +0000 UTC m=+1601.688441198" watchObservedRunningTime="2025-12-09 14:49:33.21407182 +0000 UTC m=+1605.110273956" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.439873 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-phmck"] Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 
14:49:33.448961 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.455577 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phmck"] Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.509982 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqk9k\" (UniqueName: \"kubernetes.io/projected/12780567-d84b-4bf9-bf73-b409e030f819-kube-api-access-pqk9k\") pod \"certified-operators-phmck\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.510085 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-catalog-content\") pod \"certified-operators-phmck\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.510138 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-utilities\") pod \"certified-operators-phmck\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.611679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqk9k\" (UniqueName: \"kubernetes.io/projected/12780567-d84b-4bf9-bf73-b409e030f819-kube-api-access-pqk9k\") pod \"certified-operators-phmck\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.611842 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-catalog-content\") pod \"certified-operators-phmck\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.612098 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-utilities\") pod \"certified-operators-phmck\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.612657 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-utilities\") pod \"certified-operators-phmck\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.613001 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-catalog-content\") pod \"certified-operators-phmck\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc 
kubenswrapper[4770]: I1209 14:49:33.637385 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqk9k\" (UniqueName: \"kubernetes.io/projected/12780567-d84b-4bf9-bf73-b409e030f819-kube-api-access-pqk9k\") pod \"certified-operators-phmck\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.742861 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-q88sr"] Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.745180 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.747423 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.753824 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.761893 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-q88sr"] Dec 09 14:49:33 crc kubenswrapper[4770]: I1209 14:49:33.782557 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.123037 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.152575 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-scripts\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.152892 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntpf\" (UniqueName: \"kubernetes.io/projected/47bdf914-718d-409a-a1d2-33f40c08e382-kube-api-access-nntpf\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.153119 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-config-data\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.255981 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.257637 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-scripts\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.257791 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nntpf\" (UniqueName: \"kubernetes.io/projected/47bdf914-718d-409a-a1d2-33f40c08e382-kube-api-access-nntpf\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.257855 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-config-data\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.272443 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-config-data\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.274955 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.276524 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-scripts\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.464001 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.465992 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.479122 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.579476 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nntpf\" (UniqueName: \"kubernetes.io/projected/47bdf914-718d-409a-a1d2-33f40c08e382-kube-api-access-nntpf\") pod \"nova-cell0-cell-mapping-q88sr\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.603092 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-config-data\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.603278 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47qjh\" (UniqueName: \"kubernetes.io/projected/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-kube-api-access-47qjh\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.603324 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-logs\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.603427 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.668980 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.685387 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.705264 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47qjh\" (UniqueName: \"kubernetes.io/projected/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-kube-api-access-47qjh\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.705312 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-logs\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.705394 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.705494 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-config-data\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.714438 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-logs\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.718277 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-config-data\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.734526 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.734615 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.736492 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.745260 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.761808 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.812282 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47qjh\" (UniqueName: \"kubernetes.io/projected/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-kube-api-access-47qjh\") pod \"nova-api-0\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.828475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " pod="openstack/nova-scheduler-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.828696 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-config-data\") pod \"nova-scheduler-0\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " pod="openstack/nova-scheduler-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.828905 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp6j6\" (UniqueName: \"kubernetes.io/projected/20d44d5b-f760-4451-bc4d-acbcf679ba89-kube-api-access-wp6j6\") pod \"nova-scheduler-0\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " pod="openstack/nova-scheduler-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.874500 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.875927 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.877617 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.878808 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.932717 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " pod="openstack/nova-scheduler-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.932911 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-config-data\") pod \"nova-scheduler-0\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " pod="openstack/nova-scheduler-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.933047 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp6j6\" (UniqueName: \"kubernetes.io/projected/20d44d5b-f760-4451-bc4d-acbcf679ba89-kube-api-access-wp6j6\") pod \"nova-scheduler-0\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " pod="openstack/nova-scheduler-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.934401 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.961994 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " pod="openstack/nova-scheduler-0" Dec 09 14:49:34 crc kubenswrapper[4770]: I1209 14:49:34.994650 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-config-data\") pod \"nova-scheduler-0\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " pod="openstack/nova-scheduler-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.022748 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp6j6\" (UniqueName: \"kubernetes.io/projected/20d44d5b-f760-4451-bc4d-acbcf679ba89-kube-api-access-wp6j6\") pod \"nova-scheduler-0\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " pod="openstack/nova-scheduler-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.052055 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s59nq\" (UniqueName: \"kubernetes.io/projected/c240185b-9a1f-4adb-83bb-05309deb803c-kube-api-access-s59nq\") pod \"nova-cell1-novncproxy-0\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.052779 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.053015 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.067884 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.069743 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.087158 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.146150 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.158286 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s59nq\" (UniqueName: \"kubernetes.io/projected/c240185b-9a1f-4adb-83bb-05309deb803c-kube-api-access-s59nq\") pod \"nova-cell1-novncproxy-0\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.158472 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq96j\" (UniqueName: \"kubernetes.io/projected/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-kube-api-access-xq96j\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.158512 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.158544 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-config-data\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.158645 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-logs\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.158676 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.158747 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 
14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.182026 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.182168 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.196091 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-ctpqn"] Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.200674 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s59nq\" (UniqueName: \"kubernetes.io/projected/c240185b-9a1f-4adb-83bb-05309deb803c-kube-api-access-s59nq\") pod \"nova-cell1-novncproxy-0\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.217862 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.259909 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.289049 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-config-data\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.289266 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.289304 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqxjq\" (UniqueName: \"kubernetes.io/projected/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-kube-api-access-vqxjq\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.289345 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-logs\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.289419 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc 
kubenswrapper[4770]: I1209 14:49:35.289467 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.289576 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.289895 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-config\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.289956 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.290002 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq96j\" (UniqueName: \"kubernetes.io/projected/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-kube-api-access-xq96j\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.290897 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-logs\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.292244 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-ctpqn"] Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.312534 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-config-data\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.330760 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.362591 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.369061 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq96j\" (UniqueName: \"kubernetes.io/projected/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-kube-api-access-xq96j\") pod \"nova-metadata-0\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.394954 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phmck"] Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.446750 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.452548 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-config\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.452645 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.452949 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.452984 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqxjq\" (UniqueName: \"kubernetes.io/projected/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-kube-api-access-vqxjq\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.453103 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.453204 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.462207 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-config\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.462307 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.466100 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.468143 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.484619 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.490951 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqxjq\" (UniqueName: \"kubernetes.io/projected/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-kube-api-access-vqxjq\") pod \"dnsmasq-dns-884c8b8f5-ctpqn\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:35 crc kubenswrapper[4770]: I1209 14:49:35.570562 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.003313 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phmck" event={"ID":"12780567-d84b-4bf9-bf73-b409e030f819","Type":"ContainerStarted","Data":"ad7a9c8d7d018a1b43fffbed7795dc7af9e2962564109ebe09664b831ee76238"} Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.193886 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95258"] Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.196213 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.201643 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.201999 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.213587 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95258"] Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.314147 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.326682 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-q88sr"] Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.382719 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-scripts\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.382886 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-config-data\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.382915 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.383027 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tcgl\" (UniqueName: \"kubernetes.io/projected/3ef9f303-cdd6-4694-856c-21a1589935dd-kube-api-access-8tcgl\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.485368 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-config-data\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.485712 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.485805 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tcgl\" 
(UniqueName: \"kubernetes.io/projected/3ef9f303-cdd6-4694-856c-21a1589935dd-kube-api-access-8tcgl\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.485942 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-scripts\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.491144 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.491791 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-config-data\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.500786 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-scripts\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.506761 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tcgl\" (UniqueName: \"kubernetes.io/projected/3ef9f303-cdd6-4694-856c-21a1589935dd-kube-api-access-8tcgl\") pod \"nova-cell1-conductor-db-sync-95258\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.511887 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.535163 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:36 crc kubenswrapper[4770]: I1209 14:49:36.728534 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 14:49:37 crc kubenswrapper[4770]: I1209 14:49:37.061715 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-ctpqn"] Dec 09 14:49:37 crc kubenswrapper[4770]: I1209 14:49:37.073203 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5961cd88-e1ac-4930-b9a4-4ca2cf74898a","Type":"ContainerStarted","Data":"6655485ef922406e6809bb256d6f0fd0c6be828aed6f558fc4c5626882b04ff9"} Dec 09 14:49:37 crc kubenswrapper[4770]: I1209 14:49:37.080776 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phmck" event={"ID":"12780567-d84b-4bf9-bf73-b409e030f819","Type":"ContainerStarted","Data":"05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752"} Dec 09 14:49:37 crc kubenswrapper[4770]: I1209 14:49:37.087876 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20d44d5b-f760-4451-bc4d-acbcf679ba89","Type":"ContainerStarted","Data":"526fb2aefad680914ad8b9efa9d81f4d66314a29d80d1d1db3334a56ebb6ca39"} Dec 09 14:49:37 crc kubenswrapper[4770]: I1209 14:49:37.101288 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q88sr" event={"ID":"47bdf914-718d-409a-a1d2-33f40c08e382","Type":"ContainerStarted","Data":"cfdcf09a42e287a2749d7822dbb9a0d2f12163f1646afa4a0888df45b2e61d38"} Dec 09 14:49:37 crc kubenswrapper[4770]: I1209 14:49:37.459107 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:49:38 crc kubenswrapper[4770]: I1209 14:49:38.212977 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c240185b-9a1f-4adb-83bb-05309deb803c","Type":"ContainerStarted","Data":"9267a5f5988938fbe3c68d8fff5a8667caa69bf643b5d81bc58ae72286b1324e"} Dec 09 14:49:38 crc kubenswrapper[4770]: I1209 14:49:38.236459 4770 generic.go:334] "Generic (PLEG): container finished" podID="12780567-d84b-4bf9-bf73-b409e030f819" containerID="05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752" exitCode=0 Dec 09 14:49:38 crc kubenswrapper[4770]: I1209 14:49:38.236616 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phmck" event={"ID":"12780567-d84b-4bf9-bf73-b409e030f819","Type":"ContainerDied","Data":"05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752"} Dec 09 14:49:38 crc kubenswrapper[4770]: I1209 14:49:38.257809 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" event={"ID":"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4","Type":"ContainerStarted","Data":"86332513deb1e6144e9d3d2d6be7d7f5aae9ab3fe8d8d241c2efc376a2cdbbca"} Dec 09 14:49:38 crc kubenswrapper[4770]: I1209 14:49:38.280158 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q88sr" event={"ID":"47bdf914-718d-409a-a1d2-33f40c08e382","Type":"ContainerStarted","Data":"76ec275019705a80b3d8b44ee3e351629255b750fd74cb2f3437f86cfdbd7fa9"} Dec 09 14:49:38 crc kubenswrapper[4770]: I1209 14:49:38.290994 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2082862c-d44c-4ea1-ba9c-4f2f595df7f8","Type":"ContainerStarted","Data":"f704b8228d6013495c1738165e9b7ede44b408ea770e4c2f5488a3632758cd1f"} Dec 09 14:49:38 crc kubenswrapper[4770]: I1209 14:49:38.386085 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95258"] Dec 09 14:49:38 crc kubenswrapper[4770]: I1209 14:49:38.395532 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-q88sr" podStartSLOduration=5.395503266 podStartE2EDuration="5.395503266s" podCreationTimestamp="2025-12-09 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:49:38.358633249 +0000 UTC m=+1610.254835385" watchObservedRunningTime="2025-12-09 14:49:38.395503266 +0000 UTC m=+1610.291705412" Dec 09 14:49:39 crc kubenswrapper[4770]: I1209 14:49:39.328297 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95258" event={"ID":"3ef9f303-cdd6-4694-856c-21a1589935dd","Type":"ContainerStarted","Data":"529964bbfbc9f68c4ab4597a12859313aadc1f6d9ae136cd8d4473b7b6c88166"} Dec 09 14:49:39 crc kubenswrapper[4770]: I1209 14:49:39.329557 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95258" event={"ID":"3ef9f303-cdd6-4694-856c-21a1589935dd","Type":"ContainerStarted","Data":"c08187f521b8a49244f6810be767fb7c26d132a27532c3229ae175cc3aefc7f6"} Dec 09 14:49:39 crc kubenswrapper[4770]: I1209 14:49:39.342895 4770 generic.go:334] "Generic (PLEG): container finished" podID="46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" containerID="cd9270f08ab6fa13dabbeab6f340aa44bd92f50505228ead1bb4f3fb962c308d" exitCode=0 Dec 09 14:49:39 crc kubenswrapper[4770]: I1209 14:49:39.344104 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" event={"ID":"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4","Type":"ContainerDied","Data":"cd9270f08ab6fa13dabbeab6f340aa44bd92f50505228ead1bb4f3fb962c308d"} Dec 09 14:49:39 crc kubenswrapper[4770]: I1209 14:49:39.376265 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-95258" podStartSLOduration=3.376044599 podStartE2EDuration="3.376044599s" podCreationTimestamp="2025-12-09 14:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:49:39.360883048 +0000 UTC m=+1611.257085184" watchObservedRunningTime="2025-12-09 14:49:39.376044599 +0000 UTC m=+1611.272246725" Dec 09 14:49:40 crc kubenswrapper[4770]: I1209 14:49:40.377078 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phmck" event={"ID":"12780567-d84b-4bf9-bf73-b409e030f819","Type":"ContainerStarted","Data":"d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02"} Dec 09 14:49:40 crc kubenswrapper[4770]: I1209 14:49:40.388195 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" event={"ID":"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4","Type":"ContainerStarted","Data":"081168f09c0857e1949a4a961d76d0aa87335f44089e34ddc0c7575fbcb8d2d4"} Dec 09 14:49:40 crc kubenswrapper[4770]: I1209 14:49:40.388334 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:40 crc kubenswrapper[4770]: I1209 14:49:40.429473 
4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" podStartSLOduration=5.429451031 podStartE2EDuration="5.429451031s" podCreationTimestamp="2025-12-09 14:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:49:40.42185297 +0000 UTC m=+1612.318055116" watchObservedRunningTime="2025-12-09 14:49:40.429451031 +0000 UTC m=+1612.325653167" Dec 09 14:49:41 crc kubenswrapper[4770]: I1209 14:49:41.090718 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 14:49:41 crc kubenswrapper[4770]: I1209 14:49:41.103770 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:49:41 crc kubenswrapper[4770]: I1209 14:49:41.754115 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1d87ac62-20d5-476f-97d9-34d8698fc78f" containerName="galera" probeResult="failure" output="command timed out" Dec 09 14:49:41 crc kubenswrapper[4770]: I1209 14:49:41.756792 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1d87ac62-20d5-476f-97d9-34d8698fc78f" containerName="galera" probeResult="failure" output="command timed out" Dec 09 14:49:43 crc kubenswrapper[4770]: I1209 14:49:43.420304 4770 generic.go:334] "Generic (PLEG): container finished" podID="12780567-d84b-4bf9-bf73-b409e030f819" containerID="d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02" exitCode=0 Dec 09 14:49:43 crc kubenswrapper[4770]: I1209 14:49:43.420496 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phmck" event={"ID":"12780567-d84b-4bf9-bf73-b409e030f819","Type":"ContainerDied","Data":"d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02"} Dec 09 14:49:44 crc kubenswrapper[4770]: I1209 14:49:44.244130 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:49:44 crc kubenswrapper[4770]: I1209 14:49:44.244526 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:49:44 crc kubenswrapper[4770]: I1209 14:49:44.244585 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 14:49:44 crc kubenswrapper[4770]: I1209 14:49:44.245543 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:49:44 crc kubenswrapper[4770]: I1209 14:49:44.245617 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" 
podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" gracePeriod=600 Dec 09 14:49:45 crc kubenswrapper[4770]: I1209 14:49:45.474179 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" exitCode=0 Dec 09 14:49:45 crc kubenswrapper[4770]: I1209 14:49:45.474234 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b"} Dec 09 14:49:45 crc kubenswrapper[4770]: I1209 14:49:45.474286 4770 scope.go:117] "RemoveContainer" containerID="08e4c65cee400d2486f41106aee41be450d436f2cac9e02f916b74733c20d0e5" Dec 09 14:49:45 crc kubenswrapper[4770]: I1209 14:49:45.573757 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:49:45 crc kubenswrapper[4770]: I1209 14:49:45.766029 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66sj2"] Dec 09 14:49:45 crc kubenswrapper[4770]: I1209 14:49:45.771020 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" podUID="afac8544-3931-4a40-bcd4-73e30c638547" containerName="dnsmasq-dns" containerID="cri-o://da5206a530407a6f7401425c692eb3ca82a14ced0168e8e498b112b83fcfb601" gracePeriod=10 Dec 09 14:49:46 crc kubenswrapper[4770]: I1209 14:49:46.272354 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" podUID="afac8544-3931-4a40-bcd4-73e30c638547" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Dec 09 14:49:46 crc kubenswrapper[4770]: I1209 14:49:46.496120 4770 generic.go:334] "Generic (PLEG): container finished" podID="afac8544-3931-4a40-bcd4-73e30c638547" containerID="da5206a530407a6f7401425c692eb3ca82a14ced0168e8e498b112b83fcfb601" exitCode=0 Dec 09 14:49:46 crc kubenswrapper[4770]: I1209 14:49:46.496181 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" event={"ID":"afac8544-3931-4a40-bcd4-73e30c638547","Type":"ContainerDied","Data":"da5206a530407a6f7401425c692eb3ca82a14ced0168e8e498b112b83fcfb601"} Dec 09 14:49:48 crc kubenswrapper[4770]: I1209 14:49:48.712427 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 14:49:50 crc kubenswrapper[4770]: E1209 14:49:50.068344 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:49:50 crc kubenswrapper[4770]: E1209 14:49:50.420082 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified" Dec 09 14:49:50 crc kubenswrapper[4770]: E1209 14:49:50.420288 
4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell1-novncproxy-novncproxy,Image:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n99h5b8h598h67ch7h658h5cfh8h584h569hb6h669h584hc5h577hcch9ch88h5fchd7h58h5fh56dhcfh546h5fdh579h68fh695h548h55dh679q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-novncproxy-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s59nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/vnc_lite.html,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/vnc_lite.html,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/vnc_lite.html,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell1-novncproxy-0_openstack(c240185b-9a1f-4adb-83bb-05309deb803c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 14:49:50 crc kubenswrapper[4770]: E1209 14:49:50.421469 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell1-novncproxy-novncproxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell1-novncproxy-0" podUID="c240185b-9a1f-4adb-83bb-05309deb803c" Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.556925 4770 scope.go:117] 
"RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:49:50 crc kubenswrapper[4770]: E1209 14:49:50.557505 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.560088 4770 generic.go:334] "Generic (PLEG): container finished" podID="47bdf914-718d-409a-a1d2-33f40c08e382" containerID="76ec275019705a80b3d8b44ee3e351629255b750fd74cb2f3437f86cfdbd7fa9" exitCode=0 Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.560251 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q88sr" event={"ID":"47bdf914-718d-409a-a1d2-33f40c08e382","Type":"ContainerDied","Data":"76ec275019705a80b3d8b44ee3e351629255b750fd74cb2f3437f86cfdbd7fa9"} Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.952562 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.992101 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znkng\" (UniqueName: \"kubernetes.io/projected/afac8544-3931-4a40-bcd4-73e30c638547-kube-api-access-znkng\") pod \"afac8544-3931-4a40-bcd4-73e30c638547\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.992149 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-config\") pod \"afac8544-3931-4a40-bcd4-73e30c638547\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.992323 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-svc\") pod \"afac8544-3931-4a40-bcd4-73e30c638547\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.992449 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-swift-storage-0\") pod \"afac8544-3931-4a40-bcd4-73e30c638547\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.992490 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-nb\") pod \"afac8544-3931-4a40-bcd4-73e30c638547\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " Dec 09 14:49:50 crc kubenswrapper[4770]: I1209 14:49:50.992549 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-sb\") pod \"afac8544-3931-4a40-bcd4-73e30c638547\" (UID: \"afac8544-3931-4a40-bcd4-73e30c638547\") " Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.032576 4770 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afac8544-3931-4a40-bcd4-73e30c638547-kube-api-access-znkng" (OuterVolumeSpecName: "kube-api-access-znkng") pod "afac8544-3931-4a40-bcd4-73e30c638547" (UID: "afac8544-3931-4a40-bcd4-73e30c638547"). InnerVolumeSpecName "kube-api-access-znkng". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.094838 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znkng\" (UniqueName: \"kubernetes.io/projected/afac8544-3931-4a40-bcd4-73e30c638547-kube-api-access-znkng\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.263886 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "afac8544-3931-4a40-bcd4-73e30c638547" (UID: "afac8544-3931-4a40-bcd4-73e30c638547"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.279190 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "afac8544-3931-4a40-bcd4-73e30c638547" (UID: "afac8544-3931-4a40-bcd4-73e30c638547"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.280567 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "afac8544-3931-4a40-bcd4-73e30c638547" (UID: "afac8544-3931-4a40-bcd4-73e30c638547"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.299929 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.299959 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.299970 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.304699 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-config" (OuterVolumeSpecName: "config") pod "afac8544-3931-4a40-bcd4-73e30c638547" (UID: "afac8544-3931-4a40-bcd4-73e30c638547"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.309590 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "afac8544-3931-4a40-bcd4-73e30c638547" (UID: "afac8544-3931-4a40-bcd4-73e30c638547"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.402441 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.402478 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afac8544-3931-4a40-bcd4-73e30c638547-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.450635 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.503663 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-combined-ca-bundle\") pod \"c240185b-9a1f-4adb-83bb-05309deb803c\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.503726 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-config-data\") pod \"c240185b-9a1f-4adb-83bb-05309deb803c\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.503974 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s59nq\" (UniqueName: \"kubernetes.io/projected/c240185b-9a1f-4adb-83bb-05309deb803c-kube-api-access-s59nq\") pod \"c240185b-9a1f-4adb-83bb-05309deb803c\" (UID: \"c240185b-9a1f-4adb-83bb-05309deb803c\") " Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.509542 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c240185b-9a1f-4adb-83bb-05309deb803c-kube-api-access-s59nq" (OuterVolumeSpecName: "kube-api-access-s59nq") pod "c240185b-9a1f-4adb-83bb-05309deb803c" (UID: "c240185b-9a1f-4adb-83bb-05309deb803c"). InnerVolumeSpecName "kube-api-access-s59nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.509661 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-config-data" (OuterVolumeSpecName: "config-data") pod "c240185b-9a1f-4adb-83bb-05309deb803c" (UID: "c240185b-9a1f-4adb-83bb-05309deb803c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.511905 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c240185b-9a1f-4adb-83bb-05309deb803c" (UID: "c240185b-9a1f-4adb-83bb-05309deb803c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.608991 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s59nq\" (UniqueName: \"kubernetes.io/projected/c240185b-9a1f-4adb-83bb-05309deb803c-kube-api-access-s59nq\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.609022 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.609033 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c240185b-9a1f-4adb-83bb-05309deb803c-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.612915 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" event={"ID":"afac8544-3931-4a40-bcd4-73e30c638547","Type":"ContainerDied","Data":"9754d163397458943ab570ecc8454cbf4ed8eaabeb2cb74a560e6e531e7bb118"} Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.612988 4770 scope.go:117] "RemoveContainer" containerID="da5206a530407a6f7401425c692eb3ca82a14ced0168e8e498b112b83fcfb601" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.613108 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-66sj2" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.623746 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c240185b-9a1f-4adb-83bb-05309deb803c","Type":"ContainerDied","Data":"9267a5f5988938fbe3c68d8fff5a8667caa69bf643b5d81bc58ae72286b1324e"} Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.623878 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.631041 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5961cd88-e1ac-4930-b9a4-4ca2cf74898a","Type":"ContainerStarted","Data":"5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0"} Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.631093 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5961cd88-e1ac-4930-b9a4-4ca2cf74898a","Type":"ContainerStarted","Data":"ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea"} Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.639264 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phmck" event={"ID":"12780567-d84b-4bf9-bf73-b409e030f819","Type":"ContainerStarted","Data":"defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190"} Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.659116 4770 scope.go:117] "RemoveContainer" containerID="76076d51e1ea4da8ceea2338cbdbdb28d7e6f15a7ebc5b083b299965fbb54ce0" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.660025 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20d44d5b-f760-4451-bc4d-acbcf679ba89","Type":"ContainerStarted","Data":"99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db"} Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.666353 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2082862c-d44c-4ea1-ba9c-4f2f595df7f8","Type":"ContainerStarted","Data":"12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d"} Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.670047 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66sj2"] Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.695090 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66sj2"] Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.711103 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-phmck" podStartSLOduration=6.455543246 podStartE2EDuration="18.711085144s" podCreationTimestamp="2025-12-09 14:49:33 +0000 UTC" firstStartedPulling="2025-12-09 14:49:38.256950009 +0000 UTC m=+1610.153152145" lastFinishedPulling="2025-12-09 14:49:50.512491907 +0000 UTC m=+1622.408694043" observedRunningTime="2025-12-09 14:49:51.677315642 +0000 UTC m=+1623.573517798" watchObservedRunningTime="2025-12-09 14:49:51.711085144 +0000 UTC m=+1623.607287300" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.714457 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.529512611 podStartE2EDuration="17.714432425s" podCreationTimestamp="2025-12-09 14:49:34 +0000 UTC" firstStartedPulling="2025-12-09 14:49:36.308421611 +0000 UTC m=+1608.204623747" lastFinishedPulling="2025-12-09 14:49:50.493341425 +0000 UTC m=+1622.389543561" observedRunningTime="2025-12-09 14:49:51.699118467 +0000 UTC m=+1623.595320603" watchObservedRunningTime="2025-12-09 14:49:51.714432425 +0000 UTC m=+1623.610634561" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.844870 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 
14:49:51.857050 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.874341 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 14:49:51 crc kubenswrapper[4770]: E1209 14:49:51.874872 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afac8544-3931-4a40-bcd4-73e30c638547" containerName="init" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.874889 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="afac8544-3931-4a40-bcd4-73e30c638547" containerName="init" Dec 09 14:49:51 crc kubenswrapper[4770]: E1209 14:49:51.874907 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afac8544-3931-4a40-bcd4-73e30c638547" containerName="dnsmasq-dns" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.874914 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="afac8544-3931-4a40-bcd4-73e30c638547" containerName="dnsmasq-dns" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.875125 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="afac8544-3931-4a40-bcd4-73e30c638547" containerName="dnsmasq-dns" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.876005 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.879206 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.881263 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.881403 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.901707 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.908518 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.987462524 podStartE2EDuration="17.90849981s" podCreationTimestamp="2025-12-09 14:49:34 +0000 UTC" firstStartedPulling="2025-12-09 14:49:36.539558135 +0000 UTC m=+1608.435760271" lastFinishedPulling="2025-12-09 14:49:50.460595421 +0000 UTC m=+1622.356797557" observedRunningTime="2025-12-09 14:49:51.845505392 +0000 UTC m=+1623.741707528" watchObservedRunningTime="2025-12-09 14:49:51.90849981 +0000 UTC m=+1623.804701946" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.947810 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.947878 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:51 crc 
kubenswrapper[4770]: I1209 14:49:51.947899 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.947938 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:51 crc kubenswrapper[4770]: I1209 14:49:51.948010 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcnr6\" (UniqueName: \"kubernetes.io/projected/f8dce108-23b0-4cc2-bedd-af46b899dcae-kube-api-access-dcnr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.057504 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.057706 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcnr6\" (UniqueName: \"kubernetes.io/projected/f8dce108-23b0-4cc2-bedd-af46b899dcae-kube-api-access-dcnr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.057925 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.057988 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.058008 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.065537 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 
14:49:52.065978 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.066195 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.066360 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dce108-23b0-4cc2-bedd-af46b899dcae-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.079081 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcnr6\" (UniqueName: \"kubernetes.io/projected/f8dce108-23b0-4cc2-bedd-af46b899dcae-kube-api-access-dcnr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8dce108-23b0-4cc2-bedd-af46b899dcae\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.232588 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.424733 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.465939 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-combined-ca-bundle\") pod \"47bdf914-718d-409a-a1d2-33f40c08e382\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.466004 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nntpf\" (UniqueName: \"kubernetes.io/projected/47bdf914-718d-409a-a1d2-33f40c08e382-kube-api-access-nntpf\") pod \"47bdf914-718d-409a-a1d2-33f40c08e382\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.466032 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-scripts\") pod \"47bdf914-718d-409a-a1d2-33f40c08e382\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.466194 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-config-data\") pod \"47bdf914-718d-409a-a1d2-33f40c08e382\" (UID: \"47bdf914-718d-409a-a1d2-33f40c08e382\") " Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.474447 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-scripts" (OuterVolumeSpecName: "scripts") pod "47bdf914-718d-409a-a1d2-33f40c08e382" (UID: 
"47bdf914-718d-409a-a1d2-33f40c08e382"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.480368 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47bdf914-718d-409a-a1d2-33f40c08e382-kube-api-access-nntpf" (OuterVolumeSpecName: "kube-api-access-nntpf") pod "47bdf914-718d-409a-a1d2-33f40c08e382" (UID: "47bdf914-718d-409a-a1d2-33f40c08e382"). InnerVolumeSpecName "kube-api-access-nntpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.515063 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-config-data" (OuterVolumeSpecName: "config-data") pod "47bdf914-718d-409a-a1d2-33f40c08e382" (UID: "47bdf914-718d-409a-a1d2-33f40c08e382"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.516618 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47bdf914-718d-409a-a1d2-33f40c08e382" (UID: "47bdf914-718d-409a-a1d2-33f40c08e382"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.569496 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.569545 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nntpf\" (UniqueName: \"kubernetes.io/projected/47bdf914-718d-409a-a1d2-33f40c08e382-kube-api-access-nntpf\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.569559 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.569571 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bdf914-718d-409a-a1d2-33f40c08e382-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.614705 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afac8544-3931-4a40-bcd4-73e30c638547" path="/var/lib/kubelet/pods/afac8544-3931-4a40-bcd4-73e30c638547/volumes" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.615370 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c240185b-9a1f-4adb-83bb-05309deb803c" path="/var/lib/kubelet/pods/c240185b-9a1f-4adb-83bb-05309deb803c/volumes" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.699726 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2082862c-d44c-4ea1-ba9c-4f2f595df7f8","Type":"ContainerStarted","Data":"6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44"} Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.699884 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" 
containerName="nova-metadata-log" containerID="cri-o://12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d" gracePeriod=30 Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.700528 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" containerName="nova-metadata-metadata" containerID="cri-o://6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44" gracePeriod=30 Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.714691 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q88sr" event={"ID":"47bdf914-718d-409a-a1d2-33f40c08e382","Type":"ContainerDied","Data":"cfdcf09a42e287a2749d7822dbb9a0d2f12163f1646afa4a0888df45b2e61d38"} Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.715974 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfdcf09a42e287a2749d7822dbb9a0d2f12163f1646afa4a0888df45b2e61d38" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.715028 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q88sr" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.736337 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.789543701 podStartE2EDuration="18.7363161s" podCreationTimestamp="2025-12-09 14:49:34 +0000 UTC" firstStartedPulling="2025-12-09 14:49:37.529169481 +0000 UTC m=+1609.425371617" lastFinishedPulling="2025-12-09 14:49:50.47594189 +0000 UTC m=+1622.372144016" observedRunningTime="2025-12-09 14:49:52.727294413 +0000 UTC m=+1624.623496549" watchObservedRunningTime="2025-12-09 14:49:52.7363161 +0000 UTC m=+1624.632518236" Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.790838 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.802557 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 14:49:52 crc kubenswrapper[4770]: I1209 14:49:52.845189 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 14:49:52 crc kubenswrapper[4770]: W1209 14:49:52.849169 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8dce108_23b0_4cc2_bedd_af46b899dcae.slice/crio-d49e48b5e2f3976a04fee510039ebdb382967b17f34430397db23ee1e5bacfd7 WatchSource:0}: Error finding container d49e48b5e2f3976a04fee510039ebdb382967b17f34430397db23ee1e5bacfd7: Status 404 returned error can't find the container with id d49e48b5e2f3976a04fee510039ebdb382967b17f34430397db23ee1e5bacfd7 Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.501861 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.502866 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="95b1d2b0-6b25-4853-aae2-9cdc30773854" containerName="kube-state-metrics" containerID="cri-o://41526fda3f0c20637e90ec76b311210a3f964fc0aa8f7ee58559a48c1a5ccee1" gracePeriod=30 Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.671468 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.736880 4770 generic.go:334] "Generic (PLEG): container finished" podID="95b1d2b0-6b25-4853-aae2-9cdc30773854" containerID="41526fda3f0c20637e90ec76b311210a3f964fc0aa8f7ee58559a48c1a5ccee1" exitCode=2 Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.736950 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95b1d2b0-6b25-4853-aae2-9cdc30773854","Type":"ContainerDied","Data":"41526fda3f0c20637e90ec76b311210a3f964fc0aa8f7ee58559a48c1a5ccee1"} Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.739448 4770 generic.go:334] "Generic (PLEG): container finished" podID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" containerID="6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44" exitCode=0 Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.739477 4770 generic.go:334] "Generic (PLEG): container finished" podID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" containerID="12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d" exitCode=143 Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.739525 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2082862c-d44c-4ea1-ba9c-4f2f595df7f8","Type":"ContainerDied","Data":"6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44"} Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.739554 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2082862c-d44c-4ea1-ba9c-4f2f595df7f8","Type":"ContainerDied","Data":"12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d"} Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.739564 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2082862c-d44c-4ea1-ba9c-4f2f595df7f8","Type":"ContainerDied","Data":"f704b8228d6013495c1738165e9b7ede44b408ea770e4c2f5488a3632758cd1f"} Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.739580 4770 scope.go:117] "RemoveContainer" containerID="6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.739649 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.742473 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8dce108-23b0-4cc2-bedd-af46b899dcae","Type":"ContainerStarted","Data":"c7aa5ce032e6c8d18ee1968f6d912ae3d178b126588d92ff9c56a700820b63e5"} Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.742512 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8dce108-23b0-4cc2-bedd-af46b899dcae","Type":"ContainerStarted","Data":"d49e48b5e2f3976a04fee510039ebdb382967b17f34430397db23ee1e5bacfd7"} Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.742850 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerName="nova-api-api" containerID="cri-o://5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0" gracePeriod=30 Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.743107 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="20d44d5b-f760-4451-bc4d-acbcf679ba89" containerName="nova-scheduler-scheduler" containerID="cri-o://99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db" gracePeriod=30 Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.743901 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerName="nova-api-log" containerID="cri-o://ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea" gracePeriod=30 Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.786867 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.788484 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.795821 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-combined-ca-bundle\") pod \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.795857 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-logs\") pod \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.796071 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq96j\" (UniqueName: \"kubernetes.io/projected/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-kube-api-access-xq96j\") pod \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.796177 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-config-data\") pod \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\" (UID: \"2082862c-d44c-4ea1-ba9c-4f2f595df7f8\") " Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 
14:49:53.798467 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-logs" (OuterVolumeSpecName: "logs") pod "2082862c-d44c-4ea1-ba9c-4f2f595df7f8" (UID: "2082862c-d44c-4ea1-ba9c-4f2f595df7f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.800509 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.355527885 podStartE2EDuration="2.800487527s" podCreationTimestamp="2025-12-09 14:49:51 +0000 UTC" firstStartedPulling="2025-12-09 14:49:52.851497693 +0000 UTC m=+1624.747699829" lastFinishedPulling="2025-12-09 14:49:53.296457335 +0000 UTC m=+1625.192659471" observedRunningTime="2025-12-09 14:49:53.764239589 +0000 UTC m=+1625.660441725" watchObservedRunningTime="2025-12-09 14:49:53.800487527 +0000 UTC m=+1625.696689663" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.806056 4770 scope.go:117] "RemoveContainer" containerID="12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.807281 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-kube-api-access-xq96j" (OuterVolumeSpecName: "kube-api-access-xq96j") pod "2082862c-d44c-4ea1-ba9c-4f2f595df7f8" (UID: "2082862c-d44c-4ea1-ba9c-4f2f595df7f8"). InnerVolumeSpecName "kube-api-access-xq96j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.849932 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-config-data" (OuterVolumeSpecName: "config-data") pod "2082862c-d44c-4ea1-ba9c-4f2f595df7f8" (UID: "2082862c-d44c-4ea1-ba9c-4f2f595df7f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.856534 4770 scope.go:117] "RemoveContainer" containerID="6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44" Dec 09 14:49:53 crc kubenswrapper[4770]: E1209 14:49:53.858636 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44\": container with ID starting with 6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44 not found: ID does not exist" containerID="6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.858686 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44"} err="failed to get container status \"6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44\": rpc error: code = NotFound desc = could not find container \"6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44\": container with ID starting with 6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44 not found: ID does not exist" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.858713 4770 scope.go:117] "RemoveContainer" containerID="12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.861189 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2082862c-d44c-4ea1-ba9c-4f2f595df7f8" (UID: "2082862c-d44c-4ea1-ba9c-4f2f595df7f8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:53 crc kubenswrapper[4770]: E1209 14:49:53.863123 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d\": container with ID starting with 12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d not found: ID does not exist" containerID="12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.863158 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d"} err="failed to get container status \"12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d\": rpc error: code = NotFound desc = could not find container \"12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d\": container with ID starting with 12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d not found: ID does not exist" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.863178 4770 scope.go:117] "RemoveContainer" containerID="6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.867405 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44"} err="failed to get container status \"6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44\": rpc error: code = NotFound desc = could not find container \"6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44\": container with ID starting with 6729d5350b4c5815e221dfc1173397b2f046b24fd712cd7b5d45e2d267bdad44 not found: ID does not exist" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.867441 4770 scope.go:117] "RemoveContainer" containerID="12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.870184 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d"} err="failed to get container status \"12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d\": rpc error: code = NotFound desc = could not find container \"12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d\": container with ID starting with 12cc26ae2c30e67782263cee7be99003bd9df43a93b545ccf162e41930add24d not found: ID does not exist" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.899432 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.899478 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.899491 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq96j\" (UniqueName: \"kubernetes.io/projected/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-kube-api-access-xq96j\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:53 crc kubenswrapper[4770]: I1209 14:49:53.899502 4770 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082862c-d44c-4ea1-ba9c-4f2f595df7f8-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.214416 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.230743 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.248761 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:49:54 crc kubenswrapper[4770]: E1209 14:49:54.249331 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47bdf914-718d-409a-a1d2-33f40c08e382" containerName="nova-manage" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.249349 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="47bdf914-718d-409a-a1d2-33f40c08e382" containerName="nova-manage" Dec 09 14:49:54 crc kubenswrapper[4770]: E1209 14:49:54.249366 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" containerName="nova-metadata-metadata" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.249372 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" containerName="nova-metadata-metadata" Dec 09 14:49:54 crc kubenswrapper[4770]: E1209 14:49:54.249381 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" containerName="nova-metadata-log" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.249387 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" containerName="nova-metadata-log" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.249610 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" containerName="nova-metadata-metadata" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.249634 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="47bdf914-718d-409a-a1d2-33f40c08e382" containerName="nova-manage" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.249645 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" containerName="nova-metadata-log" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.251949 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.254335 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.258095 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.260836 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.424407 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvb4\" (UniqueName: \"kubernetes.io/projected/5df3ea31-537e-474b-9e4a-b06a31d793c9-kube-api-access-hgvb4\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.424791 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.424823 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df3ea31-537e-474b-9e4a-b06a31d793c9-logs\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.424867 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-config-data\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.424934 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.437895 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.526560 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp96k\" (UniqueName: \"kubernetes.io/projected/95b1d2b0-6b25-4853-aae2-9cdc30773854-kube-api-access-mp96k\") pod \"95b1d2b0-6b25-4853-aae2-9cdc30773854\" (UID: \"95b1d2b0-6b25-4853-aae2-9cdc30773854\") " Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.531333 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.531578 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgvb4\" (UniqueName: \"kubernetes.io/projected/5df3ea31-537e-474b-9e4a-b06a31d793c9-kube-api-access-hgvb4\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.531676 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.531824 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df3ea31-537e-474b-9e4a-b06a31d793c9-logs\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.532091 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-config-data\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.533556 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df3ea31-537e-474b-9e4a-b06a31d793c9-logs\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.539459 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-config-data\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.541258 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b1d2b0-6b25-4853-aae2-9cdc30773854-kube-api-access-mp96k" (OuterVolumeSpecName: "kube-api-access-mp96k") pod "95b1d2b0-6b25-4853-aae2-9cdc30773854" (UID: "95b1d2b0-6b25-4853-aae2-9cdc30773854"). InnerVolumeSpecName "kube-api-access-mp96k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.555494 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgvb4\" (UniqueName: \"kubernetes.io/projected/5df3ea31-537e-474b-9e4a-b06a31d793c9-kube-api-access-hgvb4\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.561275 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.574952 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.612704 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2082862c-d44c-4ea1-ba9c-4f2f595df7f8" path="/var/lib/kubelet/pods/2082862c-d44c-4ea1-ba9c-4f2f595df7f8/volumes" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.637666 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp96k\" (UniqueName: \"kubernetes.io/projected/95b1d2b0-6b25-4853-aae2-9cdc30773854-kube-api-access-mp96k\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.685193 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.724417 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.762397 4770 generic.go:334] "Generic (PLEG): container finished" podID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerID="5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0" exitCode=0 Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.762438 4770 generic.go:334] "Generic (PLEG): container finished" podID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerID="ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea" exitCode=143 Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.762488 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5961cd88-e1ac-4930-b9a4-4ca2cf74898a","Type":"ContainerDied","Data":"5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0"} Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.762521 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5961cd88-e1ac-4930-b9a4-4ca2cf74898a","Type":"ContainerDied","Data":"ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea"} Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.762532 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5961cd88-e1ac-4930-b9a4-4ca2cf74898a","Type":"ContainerDied","Data":"6655485ef922406e6809bb256d6f0fd0c6be828aed6f558fc4c5626882b04ff9"} Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.762550 4770 scope.go:117] "RemoveContainer" containerID="5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.762708 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.770149 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.770160 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95b1d2b0-6b25-4853-aae2-9cdc30773854","Type":"ContainerDied","Data":"3d3ab94fe68d32754aa6fe8bc0697ec4b5da9bf6b660dc9c9fb47671f63aca3a"} Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.804973 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.820981 4770 scope.go:117] "RemoveContainer" containerID="ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.823041 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.840911 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 14:49:54 crc kubenswrapper[4770]: E1209 14:49:54.841405 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerName="nova-api-api" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.841430 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerName="nova-api-api" Dec 09 14:49:54 crc kubenswrapper[4770]: E1209 14:49:54.841446 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerName="nova-api-log" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.841455 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerName="nova-api-log" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.841481 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47qjh\" (UniqueName: \"kubernetes.io/projected/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-kube-api-access-47qjh\") pod \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " Dec 09 14:49:54 crc kubenswrapper[4770]: E1209 14:49:54.841509 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b1d2b0-6b25-4853-aae2-9cdc30773854" containerName="kube-state-metrics" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.841519 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b1d2b0-6b25-4853-aae2-9cdc30773854" containerName="kube-state-metrics" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.841825 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-combined-ca-bundle\") pod \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.841864 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-logs\") pod \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\" (UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.841930 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-config-data\") pod \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\" 
(UID: \"5961cd88-e1ac-4930-b9a4-4ca2cf74898a\") " Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.842467 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-logs" (OuterVolumeSpecName: "logs") pod "5961cd88-e1ac-4930-b9a4-4ca2cf74898a" (UID: "5961cd88-e1ac-4930-b9a4-4ca2cf74898a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.842613 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.849015 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerName="nova-api-api" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.849124 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b1d2b0-6b25-4853-aae2-9cdc30773854" containerName="kube-state-metrics" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.849162 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" containerName="nova-api-log" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.850499 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.854419 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-phmck" podUID="12780567-d84b-4bf9-bf73-b409e030f819" containerName="registry-server" probeResult="failure" output=< Dec 09 14:49:54 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Dec 09 14:49:54 crc kubenswrapper[4770]: > Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.854439 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-kube-api-access-47qjh" (OuterVolumeSpecName: "kube-api-access-47qjh") pod "5961cd88-e1ac-4930-b9a4-4ca2cf74898a" (UID: "5961cd88-e1ac-4930-b9a4-4ca2cf74898a"). InnerVolumeSpecName "kube-api-access-47qjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.854846 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.855320 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.874182 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.879807 4770 scope.go:117] "RemoveContainer" containerID="5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.883125 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-config-data" (OuterVolumeSpecName: "config-data") pod "5961cd88-e1ac-4930-b9a4-4ca2cf74898a" (UID: "5961cd88-e1ac-4930-b9a4-4ca2cf74898a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:54 crc kubenswrapper[4770]: E1209 14:49:54.883541 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0\": container with ID starting with 5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0 not found: ID does not exist" containerID="5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.883590 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0"} err="failed to get container status \"5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0\": rpc error: code = NotFound desc = could not find container \"5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0\": container with ID starting with 5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0 not found: ID does not exist" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.883623 4770 scope.go:117] "RemoveContainer" containerID="ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea" Dec 09 14:49:54 crc kubenswrapper[4770]: E1209 14:49:54.884766 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea\": container with ID starting with ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea not found: ID does not exist" containerID="ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.884797 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea"} err="failed to get container status \"ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea\": rpc error: code = NotFound desc = could not find container \"ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea\": container with ID starting with ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea not found: ID does not exist" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.884816 4770 scope.go:117] "RemoveContainer" containerID="5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.885205 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0"} err="failed to get container status \"5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0\": rpc error: code = NotFound desc = could not find container \"5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0\": container with ID starting with 5009805ce8d374257c8adf14d17c9c7d151998b24666a422c4008701c38e81c0 not found: ID does not exist" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.885233 4770 scope.go:117] "RemoveContainer" containerID="ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.891765 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea"} err="failed to get 
container status \"ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea\": rpc error: code = NotFound desc = could not find container \"ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea\": container with ID starting with ec6afbea593f8bb8d11683193fb5fce439ca2cc8bebeff4033bd288764937dea not found: ID does not exist" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.891811 4770 scope.go:117] "RemoveContainer" containerID="41526fda3f0c20637e90ec76b311210a3f964fc0aa8f7ee58559a48c1a5ccee1" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.895744 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5961cd88-e1ac-4930-b9a4-4ca2cf74898a" (UID: "5961cd88-e1ac-4930-b9a4-4ca2cf74898a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.952241 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bc8400-6001-4e33-9563-0cff42eceec2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.952283 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a9bc8400-6001-4e33-9563-0cff42eceec2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.952362 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bc8400-6001-4e33-9563-0cff42eceec2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.952432 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tghch\" (UniqueName: \"kubernetes.io/projected/a9bc8400-6001-4e33-9563-0cff42eceec2-kube-api-access-tghch\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.952588 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.952602 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:54 crc kubenswrapper[4770]: I1209 14:49:54.952612 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47qjh\" (UniqueName: \"kubernetes.io/projected/5961cd88-e1ac-4930-b9a4-4ca2cf74898a-kube-api-access-47qjh\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.054771 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-tghch\" (UniqueName: \"kubernetes.io/projected/a9bc8400-6001-4e33-9563-0cff42eceec2-kube-api-access-tghch\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.054911 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bc8400-6001-4e33-9563-0cff42eceec2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.054935 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a9bc8400-6001-4e33-9563-0cff42eceec2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.055006 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bc8400-6001-4e33-9563-0cff42eceec2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.060834 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a9bc8400-6001-4e33-9563-0cff42eceec2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.060911 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bc8400-6001-4e33-9563-0cff42eceec2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.061382 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bc8400-6001-4e33-9563-0cff42eceec2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.076602 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tghch\" (UniqueName: \"kubernetes.io/projected/a9bc8400-6001-4e33-9563-0cff42eceec2-kube-api-access-tghch\") pod \"kube-state-metrics-0\" (UID: \"a9bc8400-6001-4e33-9563-0cff42eceec2\") " pod="openstack/kube-state-metrics-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.195374 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.227769 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.247601 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.260829 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.270036 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.271921 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.276539 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.290926 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.315704 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.361972 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbsj7\" (UniqueName: \"kubernetes.io/projected/83b5815d-8094-410f-892b-779c98703730-kube-api-access-cbsj7\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.362039 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-config-data\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.362132 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.362157 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83b5815d-8094-410f-892b-779c98703730-logs\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.464551 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.464589 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83b5815d-8094-410f-892b-779c98703730-logs\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.464675 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cbsj7\" (UniqueName: \"kubernetes.io/projected/83b5815d-8094-410f-892b-779c98703730-kube-api-access-cbsj7\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.464740 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-config-data\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.465309 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83b5815d-8094-410f-892b-779c98703730-logs\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.483940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.484129 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-config-data\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.496124 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbsj7\" (UniqueName: \"kubernetes.io/projected/83b5815d-8094-410f-892b-779c98703730-kube-api-access-cbsj7\") pod \"nova-api-0\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.773304 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.791711 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5df3ea31-537e-474b-9e4a-b06a31d793c9","Type":"ContainerStarted","Data":"ed6080c7b52849f7ebfc0d84eb92544f30ba2f3fdbd416074cbc84e1f824f998"} Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.791973 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5df3ea31-537e-474b-9e4a-b06a31d793c9","Type":"ContainerStarted","Data":"9b846a48cbbad46219d43f2f755c7be0951e76f7a0545fb124bd1a3ee0e76e30"} Dec 09 14:49:55 crc kubenswrapper[4770]: I1209 14:49:55.939968 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.330673 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.602197 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5961cd88-e1ac-4930-b9a4-4ca2cf74898a" path="/var/lib/kubelet/pods/5961cd88-e1ac-4930-b9a4-4ca2cf74898a/volumes" Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.603206 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b1d2b0-6b25-4853-aae2-9cdc30773854" path="/var/lib/kubelet/pods/95b1d2b0-6b25-4853-aae2-9cdc30773854/volumes" Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.660927 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.661191 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="ceilometer-central-agent" containerID="cri-o://f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d" gracePeriod=30 Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.661654 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="proxy-httpd" containerID="cri-o://592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59" gracePeriod=30 Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.661727 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="sg-core" containerID="cri-o://6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7" gracePeriod=30 Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.661810 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="ceilometer-notification-agent" containerID="cri-o://7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3" gracePeriod=30 Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.811822 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a9bc8400-6001-4e33-9563-0cff42eceec2","Type":"ContainerStarted","Data":"d11ee1ac5b978b6def6c8895344933f5011df58225a7319df8bdc5d848ec5de6"} Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.813054 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"83b5815d-8094-410f-892b-779c98703730","Type":"ContainerStarted","Data":"d0ff4d0005cd625eb5de564ecc70e577d0583620ccb61f0f6d66e5163757e346"} Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.814804 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5df3ea31-537e-474b-9e4a-b06a31d793c9","Type":"ContainerStarted","Data":"48b2ec762655d8c5f528f1714eea10b0d8cb50aeb300eb65f6d1613aac3fac10"} Dec 09 14:49:56 crc kubenswrapper[4770]: I1209 14:49:56.845961 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.845941741 podStartE2EDuration="2.845941741s" podCreationTimestamp="2025-12-09 14:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:49:56.845400995 +0000 UTC m=+1628.741603131" watchObservedRunningTime="2025-12-09 14:49:56.845941741 +0000 UTC m=+1628.742143877" Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.232914 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.831077 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a9bc8400-6001-4e33-9563-0cff42eceec2","Type":"ContainerStarted","Data":"0d3988aa33d3624d48b58b4572f413f684261ea4dd0e5c88db4660b61101d319"} Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.834191 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.839878 4770 generic.go:334] "Generic (PLEG): container finished" podID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerID="592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59" exitCode=0 Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.839917 4770 generic.go:334] "Generic (PLEG): container finished" podID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerID="6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7" exitCode=2 Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.839928 4770 generic.go:334] "Generic (PLEG): container finished" podID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerID="f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d" exitCode=0 Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.839980 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerDied","Data":"592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59"} Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.840080 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerDied","Data":"6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7"} Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.840098 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerDied","Data":"f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d"} Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.846100 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"83b5815d-8094-410f-892b-779c98703730","Type":"ContainerStarted","Data":"0b90b1f892d099b262563ca0ed694cc258ce7c88d243e4fdc0b14f527b43d286"} Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.846147 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83b5815d-8094-410f-892b-779c98703730","Type":"ContainerStarted","Data":"c7924ce5fdf72165a219c514d6a53f39f0bcb64b7c42c1b516f7c943a90414cc"} Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.852004 4770 generic.go:334] "Generic (PLEG): container finished" podID="3ef9f303-cdd6-4694-856c-21a1589935dd" containerID="529964bbfbc9f68c4ab4597a12859313aadc1f6d9ae136cd8d4473b7b6c88166" exitCode=0 Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.852511 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95258" event={"ID":"3ef9f303-cdd6-4694-856c-21a1589935dd","Type":"ContainerDied","Data":"529964bbfbc9f68c4ab4597a12859313aadc1f6d9ae136cd8d4473b7b6c88166"} Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.867254 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.7688591970000003 podStartE2EDuration="3.867226919s" podCreationTimestamp="2025-12-09 14:49:54 +0000 UTC" firstStartedPulling="2025-12-09 14:49:55.950543477 +0000 UTC m=+1627.846745613" lastFinishedPulling="2025-12-09 14:49:57.048911199 +0000 UTC m=+1628.945113335" observedRunningTime="2025-12-09 14:49:57.861853652 +0000 UTC m=+1629.758055808" watchObservedRunningTime="2025-12-09 14:49:57.867226919 +0000 UTC m=+1629.763429055" Dec 09 14:49:57 crc kubenswrapper[4770]: I1209 14:49:57.911780 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.911756084 podStartE2EDuration="2.911756084s" podCreationTimestamp="2025-12-09 14:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:49:57.902367348 +0000 UTC m=+1629.798569484" watchObservedRunningTime="2025-12-09 14:49:57.911756084 +0000 UTC m=+1629.807958220" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.305158 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.354101 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tcgl\" (UniqueName: \"kubernetes.io/projected/3ef9f303-cdd6-4694-856c-21a1589935dd-kube-api-access-8tcgl\") pod \"3ef9f303-cdd6-4694-856c-21a1589935dd\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.354267 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-combined-ca-bundle\") pod \"3ef9f303-cdd6-4694-856c-21a1589935dd\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.354388 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-config-data\") pod \"3ef9f303-cdd6-4694-856c-21a1589935dd\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.354418 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-scripts\") pod \"3ef9f303-cdd6-4694-856c-21a1589935dd\" (UID: \"3ef9f303-cdd6-4694-856c-21a1589935dd\") " Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.359716 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-scripts" (OuterVolumeSpecName: "scripts") pod "3ef9f303-cdd6-4694-856c-21a1589935dd" (UID: "3ef9f303-cdd6-4694-856c-21a1589935dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.365890 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef9f303-cdd6-4694-856c-21a1589935dd-kube-api-access-8tcgl" (OuterVolumeSpecName: "kube-api-access-8tcgl") pod "3ef9f303-cdd6-4694-856c-21a1589935dd" (UID: "3ef9f303-cdd6-4694-856c-21a1589935dd"). InnerVolumeSpecName "kube-api-access-8tcgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.375223 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="95b1d2b0-6b25-4853-aae2-9cdc30773854" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": dial tcp 10.217.0.110:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.385322 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-config-data" (OuterVolumeSpecName: "config-data") pod "3ef9f303-cdd6-4694-856c-21a1589935dd" (UID: "3ef9f303-cdd6-4694-856c-21a1589935dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.385942 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ef9f303-cdd6-4694-856c-21a1589935dd" (UID: "3ef9f303-cdd6-4694-856c-21a1589935dd"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.457181 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tcgl\" (UniqueName: \"kubernetes.io/projected/3ef9f303-cdd6-4694-856c-21a1589935dd-kube-api-access-8tcgl\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.457491 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.457560 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.457628 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef9f303-cdd6-4694-856c-21a1589935dd-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.724790 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.726720 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.874405 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95258" event={"ID":"3ef9f303-cdd6-4694-856c-21a1589935dd","Type":"ContainerDied","Data":"c08187f521b8a49244f6810be767fb7c26d132a27532c3229ae175cc3aefc7f6"} Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.874461 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08187f521b8a49244f6810be767fb7c26d132a27532c3229ae175cc3aefc7f6" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.874833 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95258" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.967308 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 14:49:59 crc kubenswrapper[4770]: E1209 14:49:59.967773 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef9f303-cdd6-4694-856c-21a1589935dd" containerName="nova-cell1-conductor-db-sync" Dec 09 14:49:59 crc kubenswrapper[4770]: I1209 14:49:59.967791 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef9f303-cdd6-4694-856c-21a1589935dd" containerName="nova-cell1-conductor-db-sync" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:49:59.967976 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef9f303-cdd6-4694-856c-21a1589935dd" containerName="nova-cell1-conductor-db-sync" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:49:59.969184 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:49:59.978927 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.057854 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.071176 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsr7\" (UniqueName: \"kubernetes.io/projected/be69bcf3-ffee-4bfd-a1d2-e3c9337d0722-kube-api-access-5xsr7\") pod \"nova-cell1-conductor-0\" (UID: \"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722\") " pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.071305 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be69bcf3-ffee-4bfd-a1d2-e3c9337d0722-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722\") " pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.071354 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be69bcf3-ffee-4bfd-a1d2-e3c9337d0722-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722\") " pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.173397 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be69bcf3-ffee-4bfd-a1d2-e3c9337d0722-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722\") " pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.173503 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be69bcf3-ffee-4bfd-a1d2-e3c9337d0722-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722\") " pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.173585 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xsr7\" (UniqueName: \"kubernetes.io/projected/be69bcf3-ffee-4bfd-a1d2-e3c9337d0722-kube-api-access-5xsr7\") pod \"nova-cell1-conductor-0\" (UID: \"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722\") " pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.180140 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be69bcf3-ffee-4bfd-a1d2-e3c9337d0722-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722\") " pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.180863 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be69bcf3-ffee-4bfd-a1d2-e3c9337d0722-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722\") " pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.191641 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xsr7\" (UniqueName: \"kubernetes.io/projected/be69bcf3-ffee-4bfd-a1d2-e3c9337d0722-kube-api-access-5xsr7\") pod \"nova-cell1-conductor-0\" (UID: \"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722\") " pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.390809 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.511584 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.584116 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn6sj\" (UniqueName: \"kubernetes.io/projected/dae509f5-4248-4e67-9d83-9ec678ca0170-kube-api-access-hn6sj\") pod \"dae509f5-4248-4e67-9d83-9ec678ca0170\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.584163 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-log-httpd\") pod \"dae509f5-4248-4e67-9d83-9ec678ca0170\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.584192 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-config-data\") pod \"dae509f5-4248-4e67-9d83-9ec678ca0170\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.584245 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-scripts\") pod \"dae509f5-4248-4e67-9d83-9ec678ca0170\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.584362 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-run-httpd\") pod \"dae509f5-4248-4e67-9d83-9ec678ca0170\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.584453 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-combined-ca-bundle\") pod \"dae509f5-4248-4e67-9d83-9ec678ca0170\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.584495 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-sg-core-conf-yaml\") pod \"dae509f5-4248-4e67-9d83-9ec678ca0170\" (UID: \"dae509f5-4248-4e67-9d83-9ec678ca0170\") " Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.584777 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dae509f5-4248-4e67-9d83-9ec678ca0170" (UID: "dae509f5-4248-4e67-9d83-9ec678ca0170"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.585011 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.585237 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dae509f5-4248-4e67-9d83-9ec678ca0170" (UID: "dae509f5-4248-4e67-9d83-9ec678ca0170"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.590254 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae509f5-4248-4e67-9d83-9ec678ca0170-kube-api-access-hn6sj" (OuterVolumeSpecName: "kube-api-access-hn6sj") pod "dae509f5-4248-4e67-9d83-9ec678ca0170" (UID: "dae509f5-4248-4e67-9d83-9ec678ca0170"). InnerVolumeSpecName "kube-api-access-hn6sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.592559 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-scripts" (OuterVolumeSpecName: "scripts") pod "dae509f5-4248-4e67-9d83-9ec678ca0170" (UID: "dae509f5-4248-4e67-9d83-9ec678ca0170"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.624813 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dae509f5-4248-4e67-9d83-9ec678ca0170" (UID: "dae509f5-4248-4e67-9d83-9ec678ca0170"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.688945 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae509f5-4248-4e67-9d83-9ec678ca0170-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.689024 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.689041 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn6sj\" (UniqueName: \"kubernetes.io/projected/dae509f5-4248-4e67-9d83-9ec678ca0170-kube-api-access-hn6sj\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.689054 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.695315 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dae509f5-4248-4e67-9d83-9ec678ca0170" (UID: "dae509f5-4248-4e67-9d83-9ec678ca0170"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.698138 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-config-data" (OuterVolumeSpecName: "config-data") pod "dae509f5-4248-4e67-9d83-9ec678ca0170" (UID: "dae509f5-4248-4e67-9d83-9ec678ca0170"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.791877 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.791917 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae509f5-4248-4e67-9d83-9ec678ca0170-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.876489 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 14:50:00 crc kubenswrapper[4770]: W1209 14:50:00.879192 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe69bcf3_ffee_4bfd_a1d2_e3c9337d0722.slice/crio-4db2e1c55698a259bdaf27a08187166da1694f82b1a844aef75d56fb1d864105 WatchSource:0}: Error finding container 4db2e1c55698a259bdaf27a08187166da1694f82b1a844aef75d56fb1d864105: Status 404 returned error can't find the container with id 4db2e1c55698a259bdaf27a08187166da1694f82b1a844aef75d56fb1d864105 Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.896901 4770 generic.go:334] "Generic (PLEG): container finished" podID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerID="7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3" exitCode=0 Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.896941 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerDied","Data":"7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3"} Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.896980 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae509f5-4248-4e67-9d83-9ec678ca0170","Type":"ContainerDied","Data":"a3e2698f1515e3cd5f9271a690a9557a0a5c41d7cf7b8cf271dead8ffe5e0a87"} Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.896996 4770 scope.go:117] "RemoveContainer" containerID="592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.897526 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.938517 4770 scope.go:117] "RemoveContainer" containerID="6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.960847 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.966857 4770 scope.go:117] "RemoveContainer" containerID="7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.972524 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.986151 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:00 crc kubenswrapper[4770]: E1209 14:50:00.987880 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="ceilometer-central-agent" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.987914 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="ceilometer-central-agent" Dec 09 14:50:00 crc kubenswrapper[4770]: E1209 14:50:00.987948 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="sg-core" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.987957 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="sg-core" Dec 09 14:50:00 crc kubenswrapper[4770]: E1209 14:50:00.987968 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="ceilometer-notification-agent" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.987976 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="ceilometer-notification-agent" Dec 09 14:50:00 crc kubenswrapper[4770]: E1209 14:50:00.988005 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="proxy-httpd" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.988014 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="proxy-httpd" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.988278 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="sg-core" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.988298 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="ceilometer-notification-agent" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.988310 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="proxy-httpd" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.988332 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" containerName="ceilometer-central-agent" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.993633 4770 scope.go:117] "RemoveContainer" containerID="f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.994811 4770 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.997321 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.998221 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 14:50:00 crc kubenswrapper[4770]: I1209 14:50:00.998513 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.005888 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.033963 4770 scope.go:117] "RemoveContainer" containerID="592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59" Dec 09 14:50:01 crc kubenswrapper[4770]: E1209 14:50:01.034415 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59\": container with ID starting with 592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59 not found: ID does not exist" containerID="592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.034468 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59"} err="failed to get container status \"592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59\": rpc error: code = NotFound desc = could not find container \"592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59\": container with ID starting with 592565e7678425fc6095b6587d71472962ad413465a84fa2e6f8da2187472b59 not found: ID does not exist" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.034489 4770 scope.go:117] "RemoveContainer" containerID="6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7" Dec 09 14:50:01 crc kubenswrapper[4770]: E1209 14:50:01.035038 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7\": container with ID starting with 6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7 not found: ID does not exist" containerID="6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.035086 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7"} err="failed to get container status \"6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7\": rpc error: code = NotFound desc = could not find container \"6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7\": container with ID starting with 6a5f2f1a995ee50808e2055a2da11271a0bed581265cfb2021788eafdbcd3aa7 not found: ID does not exist" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.035113 4770 scope.go:117] "RemoveContainer" containerID="7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3" Dec 09 14:50:01 crc kubenswrapper[4770]: E1209 14:50:01.035442 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3\": container with ID starting with 7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3 not found: ID does not exist" containerID="7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.035490 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3"} err="failed to get container status \"7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3\": rpc error: code = NotFound desc = could not find container \"7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3\": container with ID starting with 7718a5121e8284543a2ac926060776ad1e669e6a9d81d453d88313a52104e7b3 not found: ID does not exist" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.035521 4770 scope.go:117] "RemoveContainer" containerID="f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d" Dec 09 14:50:01 crc kubenswrapper[4770]: E1209 14:50:01.035850 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d\": container with ID starting with f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d not found: ID does not exist" containerID="f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.035873 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d"} err="failed to get container status \"f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d\": rpc error: code = NotFound desc = could not find container \"f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d\": container with ID starting with f4e80a2f76cafa18f7bfdc294caa03c16db7ce3030f5abbd5b6057260b14ef1d not found: ID does not exist" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.097812 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6zsd\" (UniqueName: \"kubernetes.io/projected/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-kube-api-access-k6zsd\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.098403 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.098517 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.098630 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-config-data\") 
pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.098762 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-run-httpd\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.098936 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.099063 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-scripts\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.099460 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-log-httpd\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.203040 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.203129 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-scripts\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.203291 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-log-httpd\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.203422 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6zsd\" (UniqueName: \"kubernetes.io/projected/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-kube-api-access-k6zsd\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.203466 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.203495 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.203572 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-config-data\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.203659 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-run-httpd\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.204485 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-run-httpd\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.205244 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-log-httpd\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.216013 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.216757 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-config-data\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.217123 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-scripts\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.218002 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.218022 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.244800 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6zsd\" (UniqueName: \"kubernetes.io/projected/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-kube-api-access-k6zsd\") 
pod \"ceilometer-0\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.354823 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.589785 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:50:01 crc kubenswrapper[4770]: E1209 14:50:01.590305 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.868146 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.911692 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722","Type":"ContainerStarted","Data":"bdbdaf62d75435edbc6546d630f5ef6f4e06015db547e28020b88b49e6ac87af"} Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.911770 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"be69bcf3-ffee-4bfd-a1d2-e3c9337d0722","Type":"ContainerStarted","Data":"4db2e1c55698a259bdaf27a08187166da1694f82b1a844aef75d56fb1d864105"} Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.911845 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.915151 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerStarted","Data":"9dcb99fdac6fdcef805f35acba49bbc6a37898a7e4925727aef7b4860eafa06c"} Dec 09 14:50:01 crc kubenswrapper[4770]: I1209 14:50:01.944507 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.944485367 podStartE2EDuration="2.944485367s" podCreationTimestamp="2025-12-09 14:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:50:01.942711988 +0000 UTC m=+1633.838914134" watchObservedRunningTime="2025-12-09 14:50:01.944485367 +0000 UTC m=+1633.840687523" Dec 09 14:50:02 crc kubenswrapper[4770]: I1209 14:50:02.233587 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:50:02 crc kubenswrapper[4770]: I1209 14:50:02.253632 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:50:02 crc kubenswrapper[4770]: I1209 14:50:02.607158 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae509f5-4248-4e67-9d83-9ec678ca0170" path="/var/lib/kubelet/pods/dae509f5-4248-4e67-9d83-9ec678ca0170/volumes" Dec 09 14:50:02 crc kubenswrapper[4770]: I1209 14:50:02.931555 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerStarted","Data":"b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca"} Dec 09 14:50:02 crc kubenswrapper[4770]: I1209 14:50:02.949293 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 09 14:50:03 crc kubenswrapper[4770]: I1209 14:50:03.860603 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:50:03 crc kubenswrapper[4770]: I1209 14:50:03.930271 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:50:03 crc kubenswrapper[4770]: I1209 14:50:03.945577 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerStarted","Data":"6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904"} Dec 09 14:50:03 crc kubenswrapper[4770]: I1209 14:50:03.945626 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerStarted","Data":"c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918"} Dec 09 14:50:04 crc kubenswrapper[4770]: I1209 14:50:04.652256 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phmck"] Dec 09 14:50:04 crc kubenswrapper[4770]: I1209 14:50:04.724566 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 14:50:04 crc kubenswrapper[4770]: I1209 14:50:04.724615 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 14:50:04 crc kubenswrapper[4770]: I1209 14:50:04.957095 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-phmck" podUID="12780567-d84b-4bf9-bf73-b409e030f819" containerName="registry-server" containerID="cri-o://defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190" gracePeriod=2 Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.216615 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.531686 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.612447 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqk9k\" (UniqueName: \"kubernetes.io/projected/12780567-d84b-4bf9-bf73-b409e030f819-kube-api-access-pqk9k\") pod \"12780567-d84b-4bf9-bf73-b409e030f819\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.612584 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-utilities\") pod \"12780567-d84b-4bf9-bf73-b409e030f819\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.612618 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-catalog-content\") pod \"12780567-d84b-4bf9-bf73-b409e030f819\" (UID: \"12780567-d84b-4bf9-bf73-b409e030f819\") " Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.613640 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-utilities" (OuterVolumeSpecName: "utilities") pod "12780567-d84b-4bf9-bf73-b409e030f819" (UID: "12780567-d84b-4bf9-bf73-b409e030f819"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.618201 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12780567-d84b-4bf9-bf73-b409e030f819-kube-api-access-pqk9k" (OuterVolumeSpecName: "kube-api-access-pqk9k") pod "12780567-d84b-4bf9-bf73-b409e030f819" (UID: "12780567-d84b-4bf9-bf73-b409e030f819"). InnerVolumeSpecName "kube-api-access-pqk9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.670281 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12780567-d84b-4bf9-bf73-b409e030f819" (UID: "12780567-d84b-4bf9-bf73-b409e030f819"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.718474 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqk9k\" (UniqueName: \"kubernetes.io/projected/12780567-d84b-4bf9-bf73-b409e030f819-kube-api-access-pqk9k\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.718512 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.718525 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12780567-d84b-4bf9-bf73-b409e030f819-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.740061 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.740113 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.773666 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.773971 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.969678 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerStarted","Data":"c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031"} Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.969826 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.972153 4770 generic.go:334] "Generic (PLEG): container finished" podID="12780567-d84b-4bf9-bf73-b409e030f819" containerID="defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190" exitCode=0 Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.972181 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phmck" event={"ID":"12780567-d84b-4bf9-bf73-b409e030f819","Type":"ContainerDied","Data":"defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190"} Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.972207 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-phmck" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.972228 4770 scope.go:117] "RemoveContainer" containerID="defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190" Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.972214 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phmck" event={"ID":"12780567-d84b-4bf9-bf73-b409e030f819","Type":"ContainerDied","Data":"ad7a9c8d7d018a1b43fffbed7795dc7af9e2962564109ebe09664b831ee76238"} Dec 09 14:50:05 crc kubenswrapper[4770]: I1209 14:50:05.994727 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.069911236 podStartE2EDuration="5.994707356s" podCreationTimestamp="2025-12-09 14:50:00 +0000 UTC" firstStartedPulling="2025-12-09 14:50:01.879075241 +0000 UTC m=+1633.775277377" lastFinishedPulling="2025-12-09 14:50:04.803871361 +0000 UTC m=+1636.700073497" observedRunningTime="2025-12-09 14:50:05.992412794 +0000 UTC m=+1637.888614930" watchObservedRunningTime="2025-12-09 14:50:05.994707356 +0000 UTC m=+1637.890909492" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.003301 4770 scope.go:117] "RemoveContainer" containerID="d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.018790 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phmck"] Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.024450 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-phmck"] Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.030916 4770 scope.go:117] "RemoveContainer" containerID="05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.077093 4770 scope.go:117] "RemoveContainer" containerID="defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190" Dec 09 14:50:06 crc kubenswrapper[4770]: E1209 14:50:06.077576 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190\": container with ID starting with defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190 not found: ID does not exist" containerID="defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.077622 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190"} err="failed to get container status \"defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190\": rpc error: code = NotFound desc = could not find container \"defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190\": container with ID starting with defb074304b0ef7be26f5888e5f73f3840e229d19fb3a49785c3c71be0422190 not found: ID does not exist" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.077656 4770 scope.go:117] "RemoveContainer" containerID="d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02" Dec 09 14:50:06 crc kubenswrapper[4770]: E1209 14:50:06.077997 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02\": container with ID starting with d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02 not found: ID does not exist" containerID="d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.078214 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02"} err="failed to get container status \"d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02\": rpc error: code = NotFound desc = could not find container \"d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02\": container with ID starting with d7e6c7ad196e9d074aa15cf7c5966cc706b30d01f2310725c09bb4a898cbab02 not found: ID does not exist" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.078233 4770 scope.go:117] "RemoveContainer" containerID="05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752" Dec 09 14:50:06 crc kubenswrapper[4770]: E1209 14:50:06.081118 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752\": container with ID starting with 05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752 not found: ID does not exist" containerID="05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.081168 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752"} err="failed to get container status \"05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752\": rpc error: code = NotFound desc = could not find container \"05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752\": container with ID starting with 05bd8e87e720976c78a517b794b045a7bc6022f09680508e66d9a7600083c752 not found: ID does not exist" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.608387 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12780567-d84b-4bf9-bf73-b409e030f819" path="/var/lib/kubelet/pods/12780567-d84b-4bf9-bf73-b409e030f819/volumes" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.861992 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 14:50:06 crc kubenswrapper[4770]: I1209 14:50:06.861976 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.447123 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.912158 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-nr4sn"] Dec 09 14:50:10 crc kubenswrapper[4770]: E1209 14:50:10.912582 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="12780567-d84b-4bf9-bf73-b409e030f819" containerName="extract-content" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.912599 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="12780567-d84b-4bf9-bf73-b409e030f819" containerName="extract-content" Dec 09 14:50:10 crc kubenswrapper[4770]: E1209 14:50:10.912616 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12780567-d84b-4bf9-bf73-b409e030f819" containerName="extract-utilities" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.912622 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="12780567-d84b-4bf9-bf73-b409e030f819" containerName="extract-utilities" Dec 09 14:50:10 crc kubenswrapper[4770]: E1209 14:50:10.912631 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12780567-d84b-4bf9-bf73-b409e030f819" containerName="registry-server" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.912638 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="12780567-d84b-4bf9-bf73-b409e030f819" containerName="registry-server" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.918201 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="12780567-d84b-4bf9-bf73-b409e030f819" containerName="registry-server" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.919267 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.921293 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.922886 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.957814 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nr4sn"] Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.964391 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.964479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-config-data\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.964529 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbt76\" (UniqueName: \"kubernetes.io/projected/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-kube-api-access-pbt76\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:10 crc kubenswrapper[4770]: I1209 14:50:10.964597 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-scripts\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " 
pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:11 crc kubenswrapper[4770]: I1209 14:50:11.066937 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:11 crc kubenswrapper[4770]: I1209 14:50:11.067054 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-config-data\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:11 crc kubenswrapper[4770]: I1209 14:50:11.067139 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbt76\" (UniqueName: \"kubernetes.io/projected/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-kube-api-access-pbt76\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:11 crc kubenswrapper[4770]: I1209 14:50:11.067226 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-scripts\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:11 crc kubenswrapper[4770]: I1209 14:50:11.074131 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-config-data\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:11 crc kubenswrapper[4770]: I1209 14:50:11.076018 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:11 crc kubenswrapper[4770]: I1209 14:50:11.087001 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-scripts\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:11 crc kubenswrapper[4770]: I1209 14:50:11.088527 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbt76\" (UniqueName: \"kubernetes.io/projected/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-kube-api-access-pbt76\") pod \"nova-cell1-cell-mapping-nr4sn\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:11 crc kubenswrapper[4770]: I1209 14:50:11.269337 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:12 crc kubenswrapper[4770]: I1209 14:50:12.686271 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nr4sn"] Dec 09 14:50:13 crc kubenswrapper[4770]: I1209 14:50:13.052027 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nr4sn" event={"ID":"b9decc75-a5aa-4b47-afb5-4f7c95b3796e","Type":"ContainerStarted","Data":"68ab1470e2ce9c49e0a51721502a7ab768924fa479ead3c3f8916880728e50b3"} Dec 09 14:50:13 crc kubenswrapper[4770]: I1209 14:50:13.052422 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nr4sn" event={"ID":"b9decc75-a5aa-4b47-afb5-4f7c95b3796e","Type":"ContainerStarted","Data":"8b9687c3df97e26351c5fabe63131e0ab19ee6cde9c8646be71437210e5f7f4a"} Dec 09 14:50:13 crc kubenswrapper[4770]: I1209 14:50:13.071644 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-nr4sn" podStartSLOduration=3.071626688 podStartE2EDuration="3.071626688s" podCreationTimestamp="2025-12-09 14:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:50:13.066489547 +0000 UTC m=+1644.962691723" watchObservedRunningTime="2025-12-09 14:50:13.071626688 +0000 UTC m=+1644.967828824" Dec 09 14:50:14 crc kubenswrapper[4770]: I1209 14:50:14.729533 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 14:50:14 crc kubenswrapper[4770]: I1209 14:50:14.732905 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 14:50:14 crc kubenswrapper[4770]: I1209 14:50:14.733867 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 14:50:15 crc kubenswrapper[4770]: I1209 14:50:15.084748 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 14:50:15 crc kubenswrapper[4770]: I1209 14:50:15.779155 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 14:50:15 crc kubenswrapper[4770]: I1209 14:50:15.779535 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 14:50:15 crc kubenswrapper[4770]: I1209 14:50:15.781298 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 14:50:15 crc kubenswrapper[4770]: I1209 14:50:15.781934 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.082633 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.085801 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.286707 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dd998c-chhpb"] Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.289091 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.315835 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-chhpb"] Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.428781 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.429126 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.429228 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-svc\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.429286 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7pj\" (UniqueName: \"kubernetes.io/projected/8534b0f6-a782-4296-8a6f-f0eefb01d33a-kube-api-access-8r7pj\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.429350 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-config\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.429466 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.530986 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.531033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-svc\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.531065 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-8r7pj\" (UniqueName: \"kubernetes.io/projected/8534b0f6-a782-4296-8a6f-f0eefb01d33a-kube-api-access-8r7pj\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.531094 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-config\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.531140 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.531183 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.532068 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.532562 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.533077 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-svc\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.533943 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-config\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.534528 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.559701 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7pj\" (UniqueName: \"kubernetes.io/projected/8534b0f6-a782-4296-8a6f-f0eefb01d33a-kube-api-access-8r7pj\") pod 
\"dnsmasq-dns-54dd998c-chhpb\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.589855 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:50:16 crc kubenswrapper[4770]: E1209 14:50:16.590331 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:50:16 crc kubenswrapper[4770]: I1209 14:50:16.613827 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:17 crc kubenswrapper[4770]: I1209 14:50:17.121162 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-chhpb"] Dec 09 14:50:17 crc kubenswrapper[4770]: W1209 14:50:17.138379 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8534b0f6_a782_4296_8a6f_f0eefb01d33a.slice/crio-9042ac51e9bc479acad542da911c7740f21f6c8b7015842138f953d25d11e507 WatchSource:0}: Error finding container 9042ac51e9bc479acad542da911c7740f21f6c8b7015842138f953d25d11e507: Status 404 returned error can't find the container with id 9042ac51e9bc479acad542da911c7740f21f6c8b7015842138f953d25d11e507 Dec 09 14:50:18 crc kubenswrapper[4770]: I1209 14:50:18.121060 4770 generic.go:334] "Generic (PLEG): container finished" podID="8534b0f6-a782-4296-8a6f-f0eefb01d33a" containerID="e40194ceef5a19e9dfc63b26ff21214ef8ea505ebe09c22e62a297aaea4870fb" exitCode=0 Dec 09 14:50:18 crc kubenswrapper[4770]: I1209 14:50:18.121243 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-chhpb" event={"ID":"8534b0f6-a782-4296-8a6f-f0eefb01d33a","Type":"ContainerDied","Data":"e40194ceef5a19e9dfc63b26ff21214ef8ea505ebe09c22e62a297aaea4870fb"} Dec 09 14:50:18 crc kubenswrapper[4770]: I1209 14:50:18.122460 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-chhpb" event={"ID":"8534b0f6-a782-4296-8a6f-f0eefb01d33a","Type":"ContainerStarted","Data":"9042ac51e9bc479acad542da911c7740f21f6c8b7015842138f953d25d11e507"} Dec 09 14:50:18 crc kubenswrapper[4770]: I1209 14:50:18.529490 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:18 crc kubenswrapper[4770]: I1209 14:50:18.531236 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="ceilometer-central-agent" containerID="cri-o://b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca" gracePeriod=30 Dec 09 14:50:18 crc kubenswrapper[4770]: I1209 14:50:18.531328 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="sg-core" containerID="cri-o://6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904" gracePeriod=30 Dec 09 14:50:18 crc kubenswrapper[4770]: I1209 14:50:18.531366 4770 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="ceilometer-notification-agent" containerID="cri-o://c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918" gracePeriod=30 Dec 09 14:50:18 crc kubenswrapper[4770]: I1209 14:50:18.531497 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="proxy-httpd" containerID="cri-o://c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031" gracePeriod=30 Dec 09 14:50:18 crc kubenswrapper[4770]: I1209 14:50:18.555118 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.220:3000/\": read tcp 10.217.0.2:49050->10.217.0.220:3000: read: connection reset by peer" Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.135077 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-chhpb" event={"ID":"8534b0f6-a782-4296-8a6f-f0eefb01d33a","Type":"ContainerStarted","Data":"a26abe63e8b325888effe35c6b7744e2299e3cf1a91158c47f3ba62c52e48fc8"} Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.136328 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.139900 4770 generic.go:334] "Generic (PLEG): container finished" podID="b9decc75-a5aa-4b47-afb5-4f7c95b3796e" containerID="68ab1470e2ce9c49e0a51721502a7ab768924fa479ead3c3f8916880728e50b3" exitCode=0 Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.139982 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nr4sn" event={"ID":"b9decc75-a5aa-4b47-afb5-4f7c95b3796e","Type":"ContainerDied","Data":"68ab1470e2ce9c49e0a51721502a7ab768924fa479ead3c3f8916880728e50b3"} Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.147982 4770 generic.go:334] "Generic (PLEG): container finished" podID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerID="c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031" exitCode=0 Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.148024 4770 generic.go:334] "Generic (PLEG): container finished" podID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerID="6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904" exitCode=2 Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.148033 4770 generic.go:334] "Generic (PLEG): container finished" podID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerID="b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca" exitCode=0 Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.148084 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerDied","Data":"c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031"} Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.148166 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerDied","Data":"6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904"} Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.148180 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerDied","Data":"b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca"} Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.187436 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dd998c-chhpb" podStartSLOduration=3.187409161 podStartE2EDuration="3.187409161s" podCreationTimestamp="2025-12-09 14:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:50:19.157933846 +0000 UTC m=+1651.054136002" watchObservedRunningTime="2025-12-09 14:50:19.187409161 +0000 UTC m=+1651.083611297" Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.514498 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.514980 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-api" containerID="cri-o://0b90b1f892d099b262563ca0ed694cc258ce7c88d243e4fdc0b14f527b43d286" gracePeriod=30 Dec 09 14:50:19 crc kubenswrapper[4770]: I1209 14:50:19.514862 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-log" containerID="cri-o://c7924ce5fdf72165a219c514d6a53f39f0bcb64b7c42c1b516f7c943a90414cc" gracePeriod=30 Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.157668 4770 generic.go:334] "Generic (PLEG): container finished" podID="83b5815d-8094-410f-892b-779c98703730" containerID="c7924ce5fdf72165a219c514d6a53f39f0bcb64b7c42c1b516f7c943a90414cc" exitCode=143 Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.157770 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83b5815d-8094-410f-892b-779c98703730","Type":"ContainerDied","Data":"c7924ce5fdf72165a219c514d6a53f39f0bcb64b7c42c1b516f7c943a90414cc"} Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.671111 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.831608 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbt76\" (UniqueName: \"kubernetes.io/projected/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-kube-api-access-pbt76\") pod \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.832053 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-config-data\") pod \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.832105 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-scripts\") pod \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.832277 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-combined-ca-bundle\") pod \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\" (UID: \"b9decc75-a5aa-4b47-afb5-4f7c95b3796e\") " Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.839672 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-kube-api-access-pbt76" (OuterVolumeSpecName: "kube-api-access-pbt76") pod "b9decc75-a5aa-4b47-afb5-4f7c95b3796e" (UID: "b9decc75-a5aa-4b47-afb5-4f7c95b3796e"). InnerVolumeSpecName "kube-api-access-pbt76". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.840818 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-scripts" (OuterVolumeSpecName: "scripts") pod "b9decc75-a5aa-4b47-afb5-4f7c95b3796e" (UID: "b9decc75-a5aa-4b47-afb5-4f7c95b3796e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.890965 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9decc75-a5aa-4b47-afb5-4f7c95b3796e" (UID: "b9decc75-a5aa-4b47-afb5-4f7c95b3796e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.917972 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-config-data" (OuterVolumeSpecName: "config-data") pod "b9decc75-a5aa-4b47-afb5-4f7c95b3796e" (UID: "b9decc75-a5aa-4b47-afb5-4f7c95b3796e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.922590 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.935192 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbt76\" (UniqueName: \"kubernetes.io/projected/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-kube-api-access-pbt76\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.935280 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.935312 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:20 crc kubenswrapper[4770]: I1209 14:50:20.935351 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9decc75-a5aa-4b47-afb5-4f7c95b3796e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.036116 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-sg-core-conf-yaml\") pod \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.036179 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-config-data\") pod \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.036225 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-combined-ca-bundle\") pod \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.036252 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-scripts\") pod \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.036317 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-log-httpd\") pod \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.036360 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6zsd\" (UniqueName: \"kubernetes.io/projected/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-kube-api-access-k6zsd\") pod \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.036442 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-ceilometer-tls-certs\") pod \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\" (UID: 
\"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.036513 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-run-httpd\") pod \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\" (UID: \"4760c8cc-73e9-46f4-96e3-3745fdca1e2b\") " Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.037565 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4760c8cc-73e9-46f4-96e3-3745fdca1e2b" (UID: "4760c8cc-73e9-46f4-96e3-3745fdca1e2b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.037766 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.037808 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4760c8cc-73e9-46f4-96e3-3745fdca1e2b" (UID: "4760c8cc-73e9-46f4-96e3-3745fdca1e2b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.040204 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-kube-api-access-k6zsd" (OuterVolumeSpecName: "kube-api-access-k6zsd") pod "4760c8cc-73e9-46f4-96e3-3745fdca1e2b" (UID: "4760c8cc-73e9-46f4-96e3-3745fdca1e2b"). InnerVolumeSpecName "kube-api-access-k6zsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.040275 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-scripts" (OuterVolumeSpecName: "scripts") pod "4760c8cc-73e9-46f4-96e3-3745fdca1e2b" (UID: "4760c8cc-73e9-46f4-96e3-3745fdca1e2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.067495 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4760c8cc-73e9-46f4-96e3-3745fdca1e2b" (UID: "4760c8cc-73e9-46f4-96e3-3745fdca1e2b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.115856 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4760c8cc-73e9-46f4-96e3-3745fdca1e2b" (UID: "4760c8cc-73e9-46f4-96e3-3745fdca1e2b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.140818 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.140869 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.140889 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6zsd\" (UniqueName: \"kubernetes.io/projected/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-kube-api-access-k6zsd\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.140910 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.140926 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.164353 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-config-data" (OuterVolumeSpecName: "config-data") pod "4760c8cc-73e9-46f4-96e3-3745fdca1e2b" (UID: "4760c8cc-73e9-46f4-96e3-3745fdca1e2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.172542 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4760c8cc-73e9-46f4-96e3-3745fdca1e2b" (UID: "4760c8cc-73e9-46f4-96e3-3745fdca1e2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.172868 4770 generic.go:334] "Generic (PLEG): container finished" podID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerID="c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918" exitCode=0 Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.172938 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerDied","Data":"c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918"} Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.172970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4760c8cc-73e9-46f4-96e3-3745fdca1e2b","Type":"ContainerDied","Data":"9dcb99fdac6fdcef805f35acba49bbc6a37898a7e4925727aef7b4860eafa06c"} Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.173032 4770 scope.go:117] "RemoveContainer" containerID="c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.173207 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.176802 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nr4sn" event={"ID":"b9decc75-a5aa-4b47-afb5-4f7c95b3796e","Type":"ContainerDied","Data":"8b9687c3df97e26351c5fabe63131e0ab19ee6cde9c8646be71437210e5f7f4a"} Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.176831 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nr4sn" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.176832 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b9687c3df97e26351c5fabe63131e0ab19ee6cde9c8646be71437210e5f7f4a" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.198513 4770 scope.go:117] "RemoveContainer" containerID="6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.228765 4770 scope.go:117] "RemoveContainer" containerID="c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.240541 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.250055 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.250101 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4760c8cc-73e9-46f4-96e3-3745fdca1e2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.281555 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.306925 4770 scope.go:117] "RemoveContainer" containerID="b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.308071 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:21 crc kubenswrapper[4770]: E1209 14:50:21.309316 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="sg-core" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.309354 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="sg-core" Dec 09 14:50:21 crc kubenswrapper[4770]: E1209 14:50:21.309380 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="ceilometer-central-agent" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.309388 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="ceilometer-central-agent" Dec 09 14:50:21 crc kubenswrapper[4770]: E1209 14:50:21.309397 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9decc75-a5aa-4b47-afb5-4f7c95b3796e" containerName="nova-manage" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.309406 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9decc75-a5aa-4b47-afb5-4f7c95b3796e" containerName="nova-manage" Dec 09 14:50:21 crc kubenswrapper[4770]: E1209 14:50:21.309434 4770 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="proxy-httpd" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.309443 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="proxy-httpd" Dec 09 14:50:21 crc kubenswrapper[4770]: E1209 14:50:21.309504 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="ceilometer-notification-agent" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.309513 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="ceilometer-notification-agent" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.310166 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9decc75-a5aa-4b47-afb5-4f7c95b3796e" containerName="nova-manage" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.310203 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="proxy-httpd" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.310231 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="ceilometer-central-agent" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.310254 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="sg-core" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.310296 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" containerName="ceilometer-notification-agent" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.319900 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.321507 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.322957 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.323478 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.326520 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.330340 4770 scope.go:117] "RemoveContainer" containerID="c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031" Dec 09 14:50:21 crc kubenswrapper[4770]: E1209 14:50:21.330795 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031\": container with ID starting with c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031 not found: ID does not exist" containerID="c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.330826 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031"} err="failed to get container status \"c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031\": rpc error: code = NotFound desc = could not find container \"c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031\": container with ID starting with c399410930ce89fb7b84093671ba7b2ba441bfd9dc3e8c54fb052508a5a0f031 not found: ID does not exist" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.330850 4770 scope.go:117] "RemoveContainer" containerID="6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904" Dec 09 14:50:21 crc kubenswrapper[4770]: E1209 14:50:21.331151 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904\": container with ID starting with 6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904 not found: ID does not exist" containerID="6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.331177 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904"} err="failed to get container status \"6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904\": rpc error: code = NotFound desc = could not find container \"6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904\": container with ID starting with 6b3aaa304cab2d1aa1e2a6b585a7136649ec6ec71c87f5e31b663c5cdda08904 not found: ID does not exist" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.331190 4770 scope.go:117] "RemoveContainer" containerID="c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918" Dec 09 14:50:21 crc kubenswrapper[4770]: E1209 14:50:21.331653 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918\": container with ID starting with c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918 not found: ID does not exist" containerID="c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.331696 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918"} err="failed to get container status \"c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918\": rpc error: code = NotFound desc = could not find container \"c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918\": container with ID starting with c1f961ba751e26e25c15e3fb60e661ae97b6e29c0ae57d62a51d183855f35918 not found: ID does not exist" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.331712 4770 scope.go:117] "RemoveContainer" containerID="b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca" Dec 09 14:50:21 crc kubenswrapper[4770]: E1209 14:50:21.332317 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca\": container with ID starting with b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca not found: ID does not exist" containerID="b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.332381 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca"} err="failed to get container status \"b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca\": rpc error: code = NotFound desc = could not find container \"b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca\": container with ID starting with b79f1819a1aaa9b14eb961b583660627b0354237ad21a8fcb3ba29303b5520ca not found: ID does not exist" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.454230 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.454564 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.454770 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-config-data\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.454844 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-run-httpd\") pod \"ceilometer-0\" (UID: 
\"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.454931 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-log-httpd\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.455011 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-scripts\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.455137 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm4vc\" (UniqueName: \"kubernetes.io/projected/94689226-a1d4-484c-a07d-b588d8d905d7-kube-api-access-vm4vc\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.455280 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.492591 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.492966 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-log" containerID="cri-o://ed6080c7b52849f7ebfc0d84eb92544f30ba2f3fdbd416074cbc84e1f824f998" gracePeriod=30 Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.493032 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-metadata" containerID="cri-o://48b2ec762655d8c5f528f1714eea10b0d8cb50aeb300eb65f6d1613aac3fac10" gracePeriod=30 Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.557088 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-scripts\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.557186 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm4vc\" (UniqueName: \"kubernetes.io/projected/94689226-a1d4-484c-a07d-b588d8d905d7-kube-api-access-vm4vc\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.557271 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: 
I1209 14:50:21.557403 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.557509 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.557549 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-config-data\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.557583 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-run-httpd\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.557623 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-log-httpd\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.558204 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-log-httpd\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.559404 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-run-httpd\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.629713 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.631276 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-scripts\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.634064 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-config-data\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.635303 4770 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.637844 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.646331 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm4vc\" (UniqueName: \"kubernetes.io/projected/94689226-a1d4-484c-a07d-b588d8d905d7-kube-api-access-vm4vc\") pod \"ceilometer-0\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " pod="openstack/ceilometer-0" Dec 09 14:50:21 crc kubenswrapper[4770]: I1209 14:50:21.937930 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.197410 4770 generic.go:334] "Generic (PLEG): container finished" podID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerID="ed6080c7b52849f7ebfc0d84eb92544f30ba2f3fdbd416074cbc84e1f824f998" exitCode=143 Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.197447 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5df3ea31-537e-474b-9e4a-b06a31d793c9","Type":"ContainerDied","Data":"ed6080c7b52849f7ebfc0d84eb92544f30ba2f3fdbd416074cbc84e1f824f998"} Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.294789 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7lmzc"] Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.297961 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.315701 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lmzc"] Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.373895 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-utilities\") pod \"redhat-marketplace-7lmzc\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.373966 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n64rg\" (UniqueName: \"kubernetes.io/projected/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-kube-api-access-n64rg\") pod \"redhat-marketplace-7lmzc\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.374020 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-catalog-content\") pod \"redhat-marketplace-7lmzc\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.475873 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-utilities\") pod \"redhat-marketplace-7lmzc\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.475983 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n64rg\" (UniqueName: \"kubernetes.io/projected/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-kube-api-access-n64rg\") pod \"redhat-marketplace-7lmzc\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.476032 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-catalog-content\") pod \"redhat-marketplace-7lmzc\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.476365 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-utilities\") pod \"redhat-marketplace-7lmzc\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.476563 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-catalog-content\") pod \"redhat-marketplace-7lmzc\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: W1209 14:50:22.484472 4770 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94689226_a1d4_484c_a07d_b588d8d905d7.slice/crio-9dd8e04e075d6e93f0415aee7f3127983953e4da513f8a93dc274caf14381188 WatchSource:0}: Error finding container 9dd8e04e075d6e93f0415aee7f3127983953e4da513f8a93dc274caf14381188: Status 404 returned error can't find the container with id 9dd8e04e075d6e93f0415aee7f3127983953e4da513f8a93dc274caf14381188 Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.487588 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.493645 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n64rg\" (UniqueName: \"kubernetes.io/projected/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-kube-api-access-n64rg\") pod \"redhat-marketplace-7lmzc\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.599280 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4760c8cc-73e9-46f4-96e3-3745fdca1e2b" path="/var/lib/kubelet/pods/4760c8cc-73e9-46f4-96e3-3745fdca1e2b/volumes" Dec 09 14:50:22 crc kubenswrapper[4770]: I1209 14:50:22.617532 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.084930 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.105245 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lmzc"] Dec 09 14:50:23 crc kubenswrapper[4770]: W1209 14:50:23.113000 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93303d9d_a8d9_4d34_ba35_ad8ec49222f9.slice/crio-971a815d2fa1265c163626e18edeb6946f8a7f3b322a459320f11f0d1b4a9aa0 WatchSource:0}: Error finding container 971a815d2fa1265c163626e18edeb6946f8a7f3b322a459320f11f0d1b4a9aa0: Status 404 returned error can't find the container with id 971a815d2fa1265c163626e18edeb6946f8a7f3b322a459320f11f0d1b4a9aa0 Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.216637 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerStarted","Data":"03f8c51586fdbfa1a58956dc7b3ff9bd8c1b20a0acda12b213ae654b8dbbeb2b"} Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.216679 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerStarted","Data":"9dd8e04e075d6e93f0415aee7f3127983953e4da513f8a93dc274caf14381188"} Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.219238 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lmzc" event={"ID":"93303d9d-a8d9-4d34-ba35-ad8ec49222f9","Type":"ContainerStarted","Data":"971a815d2fa1265c163626e18edeb6946f8a7f3b322a459320f11f0d1b4a9aa0"} Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.222087 4770 generic.go:334] "Generic (PLEG): container finished" podID="83b5815d-8094-410f-892b-779c98703730" containerID="0b90b1f892d099b262563ca0ed694cc258ce7c88d243e4fdc0b14f527b43d286" exitCode=0 Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.222138 4770 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-api-0" event={"ID":"83b5815d-8094-410f-892b-779c98703730","Type":"ContainerDied","Data":"0b90b1f892d099b262563ca0ed694cc258ce7c88d243e4fdc0b14f527b43d286"} Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.243458 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.304495 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-combined-ca-bundle\") pod \"83b5815d-8094-410f-892b-779c98703730\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.304635 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbsj7\" (UniqueName: \"kubernetes.io/projected/83b5815d-8094-410f-892b-779c98703730-kube-api-access-cbsj7\") pod \"83b5815d-8094-410f-892b-779c98703730\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.304705 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83b5815d-8094-410f-892b-779c98703730-logs\") pod \"83b5815d-8094-410f-892b-779c98703730\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.304825 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-config-data\") pod \"83b5815d-8094-410f-892b-779c98703730\" (UID: \"83b5815d-8094-410f-892b-779c98703730\") " Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.307665 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83b5815d-8094-410f-892b-779c98703730-logs" (OuterVolumeSpecName: "logs") pod "83b5815d-8094-410f-892b-779c98703730" (UID: "83b5815d-8094-410f-892b-779c98703730"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.308536 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83b5815d-8094-410f-892b-779c98703730-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.318753 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b5815d-8094-410f-892b-779c98703730-kube-api-access-cbsj7" (OuterVolumeSpecName: "kube-api-access-cbsj7") pod "83b5815d-8094-410f-892b-779c98703730" (UID: "83b5815d-8094-410f-892b-779c98703730"). InnerVolumeSpecName "kube-api-access-cbsj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.343813 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83b5815d-8094-410f-892b-779c98703730" (UID: "83b5815d-8094-410f-892b-779c98703730"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.354138 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-config-data" (OuterVolumeSpecName: "config-data") pod "83b5815d-8094-410f-892b-779c98703730" (UID: "83b5815d-8094-410f-892b-779c98703730"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.410684 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.410739 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbsj7\" (UniqueName: \"kubernetes.io/projected/83b5815d-8094-410f-892b-779c98703730-kube-api-access-cbsj7\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:23 crc kubenswrapper[4770]: I1209 14:50:23.410749 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b5815d-8094-410f-892b-779c98703730-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.201748 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.228718 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp6j6\" (UniqueName: \"kubernetes.io/projected/20d44d5b-f760-4451-bc4d-acbcf679ba89-kube-api-access-wp6j6\") pod \"20d44d5b-f760-4451-bc4d-acbcf679ba89\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.228831 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-combined-ca-bundle\") pod \"20d44d5b-f760-4451-bc4d-acbcf679ba89\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.228954 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-config-data\") pod \"20d44d5b-f760-4451-bc4d-acbcf679ba89\" (UID: \"20d44d5b-f760-4451-bc4d-acbcf679ba89\") " Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.234415 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d44d5b-f760-4451-bc4d-acbcf679ba89-kube-api-access-wp6j6" (OuterVolumeSpecName: "kube-api-access-wp6j6") pod "20d44d5b-f760-4451-bc4d-acbcf679ba89" (UID: "20d44d5b-f760-4451-bc4d-acbcf679ba89"). InnerVolumeSpecName "kube-api-access-wp6j6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.235840 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerStarted","Data":"2535b3ef1e8c6585600752a235ca88b578d5f9f242262726d610e7f9cc2645f6"} Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.237428 4770 generic.go:334] "Generic (PLEG): container finished" podID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerID="9d51dfebb04af4740ce09b93931a5c273a1cb2dd991fb12f9da11200ff3df191" exitCode=0 Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.237997 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lmzc" event={"ID":"93303d9d-a8d9-4d34-ba35-ad8ec49222f9","Type":"ContainerDied","Data":"9d51dfebb04af4740ce09b93931a5c273a1cb2dd991fb12f9da11200ff3df191"} Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.245142 4770 generic.go:334] "Generic (PLEG): container finished" podID="20d44d5b-f760-4451-bc4d-acbcf679ba89" containerID="99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db" exitCode=137 Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.245207 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.245243 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20d44d5b-f760-4451-bc4d-acbcf679ba89","Type":"ContainerDied","Data":"99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db"} Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.245272 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20d44d5b-f760-4451-bc4d-acbcf679ba89","Type":"ContainerDied","Data":"526fb2aefad680914ad8b9efa9d81f4d66314a29d80d1d1db3334a56ebb6ca39"} Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.245290 4770 scope.go:117] "RemoveContainer" containerID="99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.253824 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83b5815d-8094-410f-892b-779c98703730","Type":"ContainerDied","Data":"d0ff4d0005cd625eb5de564ecc70e577d0583620ccb61f0f6d66e5163757e346"} Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.254050 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.293443 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-config-data" (OuterVolumeSpecName: "config-data") pod "20d44d5b-f760-4451-bc4d-acbcf679ba89" (UID: "20d44d5b-f760-4451-bc4d-acbcf679ba89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.293564 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20d44d5b-f760-4451-bc4d-acbcf679ba89" (UID: "20d44d5b-f760-4451-bc4d-acbcf679ba89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.313246 4770 scope.go:117] "RemoveContainer" containerID="99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db" Dec 09 14:50:24 crc kubenswrapper[4770]: E1209 14:50:24.315925 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db\": container with ID starting with 99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db not found: ID does not exist" containerID="99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.316005 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db"} err="failed to get container status \"99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db\": rpc error: code = NotFound desc = could not find container \"99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db\": container with ID starting with 99f68dc09a77a496e251fe510d2d62505a8b742fb7531f9ef28e9d3dce4a56db not found: ID does not exist" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.316043 4770 scope.go:117] "RemoveContainer" containerID="0b90b1f892d099b262563ca0ed694cc258ce7c88d243e4fdc0b14f527b43d286" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.335395 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp6j6\" (UniqueName: \"kubernetes.io/projected/20d44d5b-f760-4451-bc4d-acbcf679ba89-kube-api-access-wp6j6\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.335432 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.335452 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d44d5b-f760-4451-bc4d-acbcf679ba89-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.343652 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.380421 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.387934 4770 scope.go:117] "RemoveContainer" containerID="c7924ce5fdf72165a219c514d6a53f39f0bcb64b7c42c1b516f7c943a90414cc" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.402099 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 14:50:24 crc kubenswrapper[4770]: E1209 14:50:24.402609 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-log" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.402624 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-log" Dec 09 14:50:24 crc kubenswrapper[4770]: E1209 14:50:24.402636 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-api" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.402641 
4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-api" Dec 09 14:50:24 crc kubenswrapper[4770]: E1209 14:50:24.402682 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d44d5b-f760-4451-bc4d-acbcf679ba89" containerName="nova-scheduler-scheduler" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.402688 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d44d5b-f760-4451-bc4d-acbcf679ba89" containerName="nova-scheduler-scheduler" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.402973 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d44d5b-f760-4451-bc4d-acbcf679ba89" containerName="nova-scheduler-scheduler" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.402991 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-log" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.403002 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b5815d-8094-410f-892b-779c98703730" containerName="nova-api-api" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.405469 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.407107 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.409159 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.409368 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.419470 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.440222 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-logs\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.440325 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8jw\" (UniqueName: \"kubernetes.io/projected/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-kube-api-access-zm8jw\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.440366 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.440452 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.440504 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-config-data\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.440554 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-public-tls-certs\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.541327 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-logs\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.541926 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-logs\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.542160 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8jw\" (UniqueName: \"kubernetes.io/projected/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-kube-api-access-zm8jw\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.542775 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.544070 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.544292 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-config-data\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.544438 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-public-tls-certs\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.549671 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.549977 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.550199 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-config-data\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.555014 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-public-tls-certs\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.567281 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8jw\" (UniqueName: \"kubernetes.io/projected/cfa70b39-dc84-4b2a-ad61-9e01efa16ab8-kube-api-access-zm8jw\") pod \"nova-api-0\" (UID: \"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8\") " pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.601873 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b5815d-8094-410f-892b-779c98703730" path="/var/lib/kubelet/pods/83b5815d-8094-410f-892b-779c98703730/volumes" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.682410 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.696820 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.709497 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.711080 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.714375 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.720720 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.745099 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:45190->10.217.0.216:8775: read: connection reset by peer" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.745097 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:45180->10.217.0.216:8775: read: connection reset by peer" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.748271 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d7dad5-f6f3-418b-989c-2a21ca4d76ef-config-data\") pod \"nova-scheduler-0\" (UID: \"16d7dad5-f6f3-418b-989c-2a21ca4d76ef\") " pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.748375 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d7dad5-f6f3-418b-989c-2a21ca4d76ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16d7dad5-f6f3-418b-989c-2a21ca4d76ef\") " pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.748465 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbzhz\" (UniqueName: \"kubernetes.io/projected/16d7dad5-f6f3-418b-989c-2a21ca4d76ef-kube-api-access-cbzhz\") pod \"nova-scheduler-0\" (UID: \"16d7dad5-f6f3-418b-989c-2a21ca4d76ef\") " pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.757801 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.849560 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d7dad5-f6f3-418b-989c-2a21ca4d76ef-config-data\") pod \"nova-scheduler-0\" (UID: \"16d7dad5-f6f3-418b-989c-2a21ca4d76ef\") " pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.849841 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d7dad5-f6f3-418b-989c-2a21ca4d76ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16d7dad5-f6f3-418b-989c-2a21ca4d76ef\") " pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.849995 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzhz\" (UniqueName: \"kubernetes.io/projected/16d7dad5-f6f3-418b-989c-2a21ca4d76ef-kube-api-access-cbzhz\") pod \"nova-scheduler-0\" (UID: \"16d7dad5-f6f3-418b-989c-2a21ca4d76ef\") " pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.855838 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d7dad5-f6f3-418b-989c-2a21ca4d76ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16d7dad5-f6f3-418b-989c-2a21ca4d76ef\") " pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.856196 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d7dad5-f6f3-418b-989c-2a21ca4d76ef-config-data\") pod \"nova-scheduler-0\" (UID: \"16d7dad5-f6f3-418b-989c-2a21ca4d76ef\") " pod="openstack/nova-scheduler-0" Dec 09 14:50:24 crc kubenswrapper[4770]: I1209 14:50:24.873456 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzhz\" (UniqueName: \"kubernetes.io/projected/16d7dad5-f6f3-418b-989c-2a21ca4d76ef-kube-api-access-cbzhz\") pod \"nova-scheduler-0\" (UID: \"16d7dad5-f6f3-418b-989c-2a21ca4d76ef\") " pod="openstack/nova-scheduler-0" Dec 09 14:50:25 crc kubenswrapper[4770]: I1209 14:50:25.029020 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 14:50:25 crc kubenswrapper[4770]: I1209 14:50:25.293121 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerStarted","Data":"35b69a551eb094c8e7ebfecfc996f8d5fba9a17f2338f2f08c3f72669d3f07fd"} Dec 09 14:50:25 crc kubenswrapper[4770]: I1209 14:50:25.298629 4770 generic.go:334] "Generic (PLEG): container finished" podID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerID="48b2ec762655d8c5f528f1714eea10b0d8cb50aeb300eb65f6d1613aac3fac10" exitCode=0 Dec 09 14:50:25 crc kubenswrapper[4770]: I1209 14:50:25.298719 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5df3ea31-537e-474b-9e4a-b06a31d793c9","Type":"ContainerDied","Data":"48b2ec762655d8c5f528f1714eea10b0d8cb50aeb300eb65f6d1613aac3fac10"} Dec 09 14:50:25 crc kubenswrapper[4770]: I1209 14:50:25.301567 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lmzc" event={"ID":"93303d9d-a8d9-4d34-ba35-ad8ec49222f9","Type":"ContainerStarted","Data":"0c0cd0a55cafa4db431ed6eb71cba2f10081254b92fb37983bb6fba6250e0929"} Dec 09 14:50:25 crc kubenswrapper[4770]: I1209 14:50:25.329329 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 14:50:25 crc kubenswrapper[4770]: I1209 14:50:25.672663 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 14:50:25 crc kubenswrapper[4770]: W1209 14:50:25.731122 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16d7dad5_f6f3_418b_989c_2a21ca4d76ef.slice/crio-a891e7d405922e143a3010de0a69a33bf5e512358ec4899b1614c7b37e854548 WatchSource:0}: Error finding container a891e7d405922e143a3010de0a69a33bf5e512358ec4899b1614c7b37e854548: Status 404 returned error can't find the container with id a891e7d405922e143a3010de0a69a33bf5e512358ec4899b1614c7b37e854548 Dec 09 14:50:25 crc kubenswrapper[4770]: I1209 14:50:25.924836 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.090942 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df3ea31-537e-474b-9e4a-b06a31d793c9-logs\") pod \"5df3ea31-537e-474b-9e4a-b06a31d793c9\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.091170 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-combined-ca-bundle\") pod \"5df3ea31-537e-474b-9e4a-b06a31d793c9\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.091285 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-config-data\") pod \"5df3ea31-537e-474b-9e4a-b06a31d793c9\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.091358 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgvb4\" (UniqueName: \"kubernetes.io/projected/5df3ea31-537e-474b-9e4a-b06a31d793c9-kube-api-access-hgvb4\") pod \"5df3ea31-537e-474b-9e4a-b06a31d793c9\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.091482 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-nova-metadata-tls-certs\") pod \"5df3ea31-537e-474b-9e4a-b06a31d793c9\" (UID: \"5df3ea31-537e-474b-9e4a-b06a31d793c9\") " Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.093125 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df3ea31-537e-474b-9e4a-b06a31d793c9-logs" (OuterVolumeSpecName: "logs") pod "5df3ea31-537e-474b-9e4a-b06a31d793c9" (UID: "5df3ea31-537e-474b-9e4a-b06a31d793c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.107815 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df3ea31-537e-474b-9e4a-b06a31d793c9-kube-api-access-hgvb4" (OuterVolumeSpecName: "kube-api-access-hgvb4") pod "5df3ea31-537e-474b-9e4a-b06a31d793c9" (UID: "5df3ea31-537e-474b-9e4a-b06a31d793c9"). InnerVolumeSpecName "kube-api-access-hgvb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.140555 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5df3ea31-537e-474b-9e4a-b06a31d793c9" (UID: "5df3ea31-537e-474b-9e4a-b06a31d793c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.144913 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-config-data" (OuterVolumeSpecName: "config-data") pod "5df3ea31-537e-474b-9e4a-b06a31d793c9" (UID: "5df3ea31-537e-474b-9e4a-b06a31d793c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.189515 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5df3ea31-537e-474b-9e4a-b06a31d793c9" (UID: "5df3ea31-537e-474b-9e4a-b06a31d793c9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.193837 4770 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.193876 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df3ea31-537e-474b-9e4a-b06a31d793c9-logs\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.193892 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.193903 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df3ea31-537e-474b-9e4a-b06a31d793c9-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.193915 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgvb4\" (UniqueName: \"kubernetes.io/projected/5df3ea31-537e-474b-9e4a-b06a31d793c9-kube-api-access-hgvb4\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.321096 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16d7dad5-f6f3-418b-989c-2a21ca4d76ef","Type":"ContainerStarted","Data":"84740cc44b9d7669b07bc4b93e06d4acfc925947916f6275ff9caf150e64399e"} Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.321142 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16d7dad5-f6f3-418b-989c-2a21ca4d76ef","Type":"ContainerStarted","Data":"a891e7d405922e143a3010de0a69a33bf5e512358ec4899b1614c7b37e854548"} Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.323317 4770 generic.go:334] "Generic (PLEG): container finished" podID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerID="0c0cd0a55cafa4db431ed6eb71cba2f10081254b92fb37983bb6fba6250e0929" exitCode=0 Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.323414 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lmzc" event={"ID":"93303d9d-a8d9-4d34-ba35-ad8ec49222f9","Type":"ContainerDied","Data":"0c0cd0a55cafa4db431ed6eb71cba2f10081254b92fb37983bb6fba6250e0929"} Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.325663 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8","Type":"ContainerStarted","Data":"e68286c2cb63128c0a3609e17c0fbe686c2a7a9ea33475a5752779ca6e29b8b3"} Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.325705 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8","Type":"ContainerStarted","Data":"e4ca14f0f99771c39081675e05b4323093494410c9b978e8f85c41457c80c7a0"} Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.325718 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cfa70b39-dc84-4b2a-ad61-9e01efa16ab8","Type":"ContainerStarted","Data":"df6dc7e415e06cb0217177d5d73ee5c2c456cfd036b994b5fd616a9fc1d95b02"} Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.331082 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerStarted","Data":"f0ac6ddc46124bb26230b22679efe4c6d8a98e0771ba4149f52a3c0b68947041"} Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.331351 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="ceilometer-central-agent" containerID="cri-o://03f8c51586fdbfa1a58956dc7b3ff9bd8c1b20a0acda12b213ae654b8dbbeb2b" gracePeriod=30 Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.331779 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.331868 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="proxy-httpd" containerID="cri-o://f0ac6ddc46124bb26230b22679efe4c6d8a98e0771ba4149f52a3c0b68947041" gracePeriod=30 Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.331934 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="sg-core" containerID="cri-o://35b69a551eb094c8e7ebfecfc996f8d5fba9a17f2338f2f08c3f72669d3f07fd" gracePeriod=30 Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.331983 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="ceilometer-notification-agent" containerID="cri-o://2535b3ef1e8c6585600752a235ca88b578d5f9f242262726d610e7f9cc2645f6" gracePeriod=30 Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.345625 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5df3ea31-537e-474b-9e4a-b06a31d793c9","Type":"ContainerDied","Data":"9b846a48cbbad46219d43f2f755c7be0951e76f7a0545fb124bd1a3ee0e76e30"} Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.345681 4770 scope.go:117] "RemoveContainer" containerID="48b2ec762655d8c5f528f1714eea10b0d8cb50aeb300eb65f6d1613aac3fac10" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.345681 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.355442 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.3554224169999998 podStartE2EDuration="2.355422417s" podCreationTimestamp="2025-12-09 14:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:50:26.342488745 +0000 UTC m=+1658.238690881" watchObservedRunningTime="2025-12-09 14:50:26.355422417 +0000 UTC m=+1658.251624553" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.377910 4770 scope.go:117] "RemoveContainer" containerID="ed6080c7b52849f7ebfc0d84eb92544f30ba2f3fdbd416074cbc84e1f824f998" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.397184 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.186387723 podStartE2EDuration="5.397166297s" podCreationTimestamp="2025-12-09 14:50:21 +0000 UTC" firstStartedPulling="2025-12-09 14:50:22.486959518 +0000 UTC m=+1654.383161654" lastFinishedPulling="2025-12-09 14:50:25.697738092 +0000 UTC m=+1657.593940228" observedRunningTime="2025-12-09 14:50:26.392009717 +0000 UTC m=+1658.288211853" watchObservedRunningTime="2025-12-09 14:50:26.397166297 +0000 UTC m=+1658.293368433" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.421687 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.421663185 podStartE2EDuration="2.421663185s" podCreationTimestamp="2025-12-09 14:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:50:26.415899368 +0000 UTC m=+1658.312101504" watchObservedRunningTime="2025-12-09 14:50:26.421663185 +0000 UTC m=+1658.317865321" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.494067 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.627017 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d44d5b-f760-4451-bc4d-acbcf679ba89" path="/var/lib/kubelet/pods/20d44d5b-f760-4451-bc4d-acbcf679ba89/volumes" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.627681 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.627768 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.643899 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:50:26 crc kubenswrapper[4770]: E1209 14:50:26.644403 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-log" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.644421 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-log" Dec 09 14:50:26 crc kubenswrapper[4770]: E1209 14:50:26.644431 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-metadata" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.644438 4770 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-metadata" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.644635 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-metadata" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.644662 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" containerName="nova-metadata-log" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.645958 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.654807 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.663461 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.667993 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.706257 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5eef0a-3956-4184-a76c-ab3ecc01f110-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.710105 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5eef0a-3956-4184-a76c-ab3ecc01f110-config-data\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.710371 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba5eef0a-3956-4184-a76c-ab3ecc01f110-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.710586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba5eef0a-3956-4184-a76c-ab3ecc01f110-logs\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.710644 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdztk\" (UniqueName: \"kubernetes.io/projected/ba5eef0a-3956-4184-a76c-ab3ecc01f110-kube-api-access-fdztk\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.784986 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-ctpqn"] Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.785679 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" podUID="46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" containerName="dnsmasq-dns" 
containerID="cri-o://081168f09c0857e1949a4a961d76d0aa87335f44089e34ddc0c7575fbcb8d2d4" gracePeriod=10 Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.812778 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba5eef0a-3956-4184-a76c-ab3ecc01f110-logs\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.812850 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdztk\" (UniqueName: \"kubernetes.io/projected/ba5eef0a-3956-4184-a76c-ab3ecc01f110-kube-api-access-fdztk\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.812931 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5eef0a-3956-4184-a76c-ab3ecc01f110-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.813014 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5eef0a-3956-4184-a76c-ab3ecc01f110-config-data\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.813104 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba5eef0a-3956-4184-a76c-ab3ecc01f110-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.813267 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba5eef0a-3956-4184-a76c-ab3ecc01f110-logs\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.831094 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5eef0a-3956-4184-a76c-ab3ecc01f110-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.834197 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba5eef0a-3956-4184-a76c-ab3ecc01f110-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.834376 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdztk\" (UniqueName: \"kubernetes.io/projected/ba5eef0a-3956-4184-a76c-ab3ecc01f110-kube-api-access-fdztk\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.852928 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ba5eef0a-3956-4184-a76c-ab3ecc01f110-config-data\") pod \"nova-metadata-0\" (UID: \"ba5eef0a-3956-4184-a76c-ab3ecc01f110\") " pod="openstack/nova-metadata-0" Dec 09 14:50:26 crc kubenswrapper[4770]: I1209 14:50:26.987188 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.374534 4770 generic.go:334] "Generic (PLEG): container finished" podID="94689226-a1d4-484c-a07d-b588d8d905d7" containerID="f0ac6ddc46124bb26230b22679efe4c6d8a98e0771ba4149f52a3c0b68947041" exitCode=0 Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.375072 4770 generic.go:334] "Generic (PLEG): container finished" podID="94689226-a1d4-484c-a07d-b588d8d905d7" containerID="35b69a551eb094c8e7ebfecfc996f8d5fba9a17f2338f2f08c3f72669d3f07fd" exitCode=2 Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.375085 4770 generic.go:334] "Generic (PLEG): container finished" podID="94689226-a1d4-484c-a07d-b588d8d905d7" containerID="2535b3ef1e8c6585600752a235ca88b578d5f9f242262726d610e7f9cc2645f6" exitCode=0 Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.374605 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerDied","Data":"f0ac6ddc46124bb26230b22679efe4c6d8a98e0771ba4149f52a3c0b68947041"} Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.375161 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerDied","Data":"35b69a551eb094c8e7ebfecfc996f8d5fba9a17f2338f2f08c3f72669d3f07fd"} Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.375176 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerDied","Data":"2535b3ef1e8c6585600752a235ca88b578d5f9f242262726d610e7f9cc2645f6"} Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.388409 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lmzc" event={"ID":"93303d9d-a8d9-4d34-ba35-ad8ec49222f9","Type":"ContainerStarted","Data":"84a3c6833241139020ae2c2f9d325d580c5d1e6ff764e5f5d7ae18d9e6d15490"} Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.390299 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.390451 4770 generic.go:334] "Generic (PLEG): container finished" podID="46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" containerID="081168f09c0857e1949a4a961d76d0aa87335f44089e34ddc0c7575fbcb8d2d4" exitCode=0 Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.390512 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" event={"ID":"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4","Type":"ContainerDied","Data":"081168f09c0857e1949a4a961d76d0aa87335f44089e34ddc0c7575fbcb8d2d4"} Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.390551 4770 scope.go:117] "RemoveContainer" containerID="081168f09c0857e1949a4a961d76d0aa87335f44089e34ddc0c7575fbcb8d2d4" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.431820 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7lmzc" podStartSLOduration=2.948601 podStartE2EDuration="5.431777889s" podCreationTimestamp="2025-12-09 14:50:22 +0000 UTC" firstStartedPulling="2025-12-09 14:50:24.26405395 +0000 UTC m=+1656.160256086" lastFinishedPulling="2025-12-09 14:50:26.747230839 +0000 UTC m=+1658.643432975" observedRunningTime="2025-12-09 14:50:27.411391582 +0000 UTC m=+1659.307593718" watchObservedRunningTime="2025-12-09 14:50:27.431777889 +0000 UTC m=+1659.327980025" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.459958 4770 scope.go:117] "RemoveContainer" containerID="cd9270f08ab6fa13dabbeab6f340aa44bd92f50505228ead1bb4f3fb962c308d" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.536443 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-nb\") pod \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.536568 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-swift-storage-0\") pod \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.536934 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-svc\") pod \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.537133 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqxjq\" (UniqueName: \"kubernetes.io/projected/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-kube-api-access-vqxjq\") pod \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.537210 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-sb\") pod \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.537262 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-config\") pod \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\" (UID: \"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4\") " Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.603563 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-kube-api-access-vqxjq" (OuterVolumeSpecName: "kube-api-access-vqxjq") pod "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" (UID: "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4"). InnerVolumeSpecName "kube-api-access-vqxjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.625608 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.640251 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqxjq\" (UniqueName: \"kubernetes.io/projected/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-kube-api-access-vqxjq\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.690233 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" (UID: "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.744503 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.762828 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" (UID: "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.789572 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" (UID: "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.801495 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-config" (OuterVolumeSpecName: "config") pod "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" (UID: "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.837062 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" (UID: "46ec111a-8a31-4bb9-bcdf-aa41c88dbea4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.849312 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.849394 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.849409 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:27 crc kubenswrapper[4770]: I1209 14:50:27.849422 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.404759 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba5eef0a-3956-4184-a76c-ab3ecc01f110","Type":"ContainerStarted","Data":"d9b115db0586ece9add7398716d4bc31ec04564d511785383f82b9d51190aa3e"} Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.405304 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba5eef0a-3956-4184-a76c-ab3ecc01f110","Type":"ContainerStarted","Data":"6145f0c189bdfa392003f26f00dec2fd748acfb4dc7c3c8dd7c597e761aaba3e"} Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.405327 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba5eef0a-3956-4184-a76c-ab3ecc01f110","Type":"ContainerStarted","Data":"3e3066eb0f3d11d8abb34b0006b01fe6358b33053ca0bafbf410b2449b8b5c92"} Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.406698 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.406759 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-ctpqn" event={"ID":"46ec111a-8a31-4bb9-bcdf-aa41c88dbea4","Type":"ContainerDied","Data":"86332513deb1e6144e9d3d2d6be7d7f5aae9ab3fe8d8d241c2efc376a2cdbbca"} Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.438172 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.43814307 podStartE2EDuration="2.43814307s" podCreationTimestamp="2025-12-09 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:50:28.428480966 +0000 UTC m=+1660.324683102" watchObservedRunningTime="2025-12-09 14:50:28.43814307 +0000 UTC m=+1660.334345206" Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.464008 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-ctpqn"] Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.477914 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-ctpqn"] Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.602832 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" path="/var/lib/kubelet/pods/46ec111a-8a31-4bb9-bcdf-aa41c88dbea4/volumes" Dec 09 14:50:28 crc kubenswrapper[4770]: I1209 14:50:28.603603 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df3ea31-537e-474b-9e4a-b06a31d793c9" path="/var/lib/kubelet/pods/5df3ea31-537e-474b-9e4a-b06a31d793c9/volumes" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.663564 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b4jdc"] Dec 09 14:50:29 crc kubenswrapper[4770]: E1209 14:50:29.664401 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" containerName="init" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.664417 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" containerName="init" Dec 09 14:50:29 crc kubenswrapper[4770]: E1209 14:50:29.664447 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" containerName="dnsmasq-dns" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.664453 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" containerName="dnsmasq-dns" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.664647 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ec111a-8a31-4bb9-bcdf-aa41c88dbea4" containerName="dnsmasq-dns" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.666471 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.678419 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4jdc"] Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.795413 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k42k4\" (UniqueName: \"kubernetes.io/projected/8c2c48e9-4455-4ff0-9fe4-cc0540654335-kube-api-access-k42k4\") pod \"community-operators-b4jdc\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.796161 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-utilities\") pod \"community-operators-b4jdc\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.796403 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-catalog-content\") pod \"community-operators-b4jdc\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.898577 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k42k4\" (UniqueName: \"kubernetes.io/projected/8c2c48e9-4455-4ff0-9fe4-cc0540654335-kube-api-access-k42k4\") pod \"community-operators-b4jdc\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.898747 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-utilities\") pod \"community-operators-b4jdc\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.898818 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-catalog-content\") pod \"community-operators-b4jdc\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.899247 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-utilities\") pod \"community-operators-b4jdc\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.899258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-catalog-content\") pod \"community-operators-b4jdc\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.922688 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k42k4\" (UniqueName: \"kubernetes.io/projected/8c2c48e9-4455-4ff0-9fe4-cc0540654335-kube-api-access-k42k4\") pod \"community-operators-b4jdc\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:29 crc kubenswrapper[4770]: I1209 14:50:29.992144 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:30 crc kubenswrapper[4770]: I1209 14:50:30.030145 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 14:50:30 crc kubenswrapper[4770]: I1209 14:50:30.498033 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4jdc"] Dec 09 14:50:30 crc kubenswrapper[4770]: W1209 14:50:30.499254 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2c48e9_4455_4ff0_9fe4_cc0540654335.slice/crio-effec86962f7459a0876fedd8e619345067a62cdf5d78138de72483317c75b2c WatchSource:0}: Error finding container effec86962f7459a0876fedd8e619345067a62cdf5d78138de72483317c75b2c: Status 404 returned error can't find the container with id effec86962f7459a0876fedd8e619345067a62cdf5d78138de72483317c75b2c Dec 09 14:50:30 crc kubenswrapper[4770]: I1209 14:50:30.588402 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:50:30 crc kubenswrapper[4770]: E1209 14:50:30.588978 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:50:31 crc kubenswrapper[4770]: E1209 14:50:31.310765 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2c48e9_4455_4ff0_9fe4_cc0540654335.slice/crio-22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2c48e9_4455_4ff0_9fe4_cc0540654335.slice/crio-conmon-22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.445876 4770 generic.go:334] "Generic (PLEG): container finished" podID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerID="22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8" exitCode=0 Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.445957 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jdc" event={"ID":"8c2c48e9-4455-4ff0-9fe4-cc0540654335","Type":"ContainerDied","Data":"22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8"} Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.445994 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jdc" 
event={"ID":"8c2c48e9-4455-4ff0-9fe4-cc0540654335","Type":"ContainerStarted","Data":"effec86962f7459a0876fedd8e619345067a62cdf5d78138de72483317c75b2c"} Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.450162 4770 generic.go:334] "Generic (PLEG): container finished" podID="94689226-a1d4-484c-a07d-b588d8d905d7" containerID="03f8c51586fdbfa1a58956dc7b3ff9bd8c1b20a0acda12b213ae654b8dbbeb2b" exitCode=0 Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.450245 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerDied","Data":"03f8c51586fdbfa1a58956dc7b3ff9bd8c1b20a0acda12b213ae654b8dbbeb2b"} Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.744915 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.842879 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-sg-core-conf-yaml\") pod \"94689226-a1d4-484c-a07d-b588d8d905d7\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.843157 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-scripts\") pod \"94689226-a1d4-484c-a07d-b588d8d905d7\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.843299 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-combined-ca-bundle\") pod \"94689226-a1d4-484c-a07d-b588d8d905d7\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.843457 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm4vc\" (UniqueName: \"kubernetes.io/projected/94689226-a1d4-484c-a07d-b588d8d905d7-kube-api-access-vm4vc\") pod \"94689226-a1d4-484c-a07d-b588d8d905d7\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.843607 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-log-httpd\") pod \"94689226-a1d4-484c-a07d-b588d8d905d7\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.843920 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-ceilometer-tls-certs\") pod \"94689226-a1d4-484c-a07d-b588d8d905d7\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.844048 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-config-data\") pod \"94689226-a1d4-484c-a07d-b588d8d905d7\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.844074 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "94689226-a1d4-484c-a07d-b588d8d905d7" (UID: "94689226-a1d4-484c-a07d-b588d8d905d7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.844281 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-run-httpd\") pod \"94689226-a1d4-484c-a07d-b588d8d905d7\" (UID: \"94689226-a1d4-484c-a07d-b588d8d905d7\") " Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.844584 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "94689226-a1d4-484c-a07d-b588d8d905d7" (UID: "94689226-a1d4-484c-a07d-b588d8d905d7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.845143 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.846026 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94689226-a1d4-484c-a07d-b588d8d905d7-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.849424 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94689226-a1d4-484c-a07d-b588d8d905d7-kube-api-access-vm4vc" (OuterVolumeSpecName: "kube-api-access-vm4vc") pod "94689226-a1d4-484c-a07d-b588d8d905d7" (UID: "94689226-a1d4-484c-a07d-b588d8d905d7"). InnerVolumeSpecName "kube-api-access-vm4vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.849537 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-scripts" (OuterVolumeSpecName: "scripts") pod "94689226-a1d4-484c-a07d-b588d8d905d7" (UID: "94689226-a1d4-484c-a07d-b588d8d905d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.879646 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "94689226-a1d4-484c-a07d-b588d8d905d7" (UID: "94689226-a1d4-484c-a07d-b588d8d905d7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.904218 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "94689226-a1d4-484c-a07d-b588d8d905d7" (UID: "94689226-a1d4-484c-a07d-b588d8d905d7"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.946626 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94689226-a1d4-484c-a07d-b588d8d905d7" (UID: "94689226-a1d4-484c-a07d-b588d8d905d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.948968 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.949003 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.949018 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.949029 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.949040 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm4vc\" (UniqueName: \"kubernetes.io/projected/94689226-a1d4-484c-a07d-b588d8d905d7-kube-api-access-vm4vc\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.956566 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-config-data" (OuterVolumeSpecName: "config-data") pod "94689226-a1d4-484c-a07d-b588d8d905d7" (UID: "94689226-a1d4-484c-a07d-b588d8d905d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.987327 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 14:50:31 crc kubenswrapper[4770]: I1209 14:50:31.987373 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.051086 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94689226-a1d4-484c-a07d-b588d8d905d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.463174 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94689226-a1d4-484c-a07d-b588d8d905d7","Type":"ContainerDied","Data":"9dd8e04e075d6e93f0415aee7f3127983953e4da513f8a93dc274caf14381188"} Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.463263 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.463542 4770 scope.go:117] "RemoveContainer" containerID="f0ac6ddc46124bb26230b22679efe4c6d8a98e0771ba4149f52a3c0b68947041" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.465595 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jdc" event={"ID":"8c2c48e9-4455-4ff0-9fe4-cc0540654335","Type":"ContainerStarted","Data":"c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3"} Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.484252 4770 scope.go:117] "RemoveContainer" containerID="35b69a551eb094c8e7ebfecfc996f8d5fba9a17f2338f2f08c3f72669d3f07fd" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.519889 4770 scope.go:117] "RemoveContainer" containerID="2535b3ef1e8c6585600752a235ca88b578d5f9f242262726d610e7f9cc2645f6" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.520783 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.545431 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.575799 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:32 crc kubenswrapper[4770]: E1209 14:50:32.576301 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="ceilometer-central-agent" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.576322 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="ceilometer-central-agent" Dec 09 14:50:32 crc kubenswrapper[4770]: E1209 14:50:32.576344 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="sg-core" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.576351 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="sg-core" Dec 09 14:50:32 crc kubenswrapper[4770]: E1209 14:50:32.576368 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="proxy-httpd" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.576374 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="proxy-httpd" Dec 09 14:50:32 crc kubenswrapper[4770]: E1209 14:50:32.576391 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="ceilometer-notification-agent" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.576397 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="ceilometer-notification-agent" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.576597 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="proxy-httpd" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.576619 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="sg-core" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.576632 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" 
containerName="ceilometer-central-agent" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.576645 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" containerName="ceilometer-notification-agent" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.578773 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.582499 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.582718 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.583925 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.587923 4770 scope.go:117] "RemoveContainer" containerID="03f8c51586fdbfa1a58956dc7b3ff9bd8c1b20a0acda12b213ae654b8dbbeb2b" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.617709 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94689226-a1d4-484c-a07d-b588d8d905d7" path="/var/lib/kubelet/pods/94689226-a1d4-484c-a07d-b588d8d905d7/volumes" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.618784 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.618832 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.618863 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.670313 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.671604 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67c6c\" (UniqueName: \"kubernetes.io/projected/1f8bb602-e003-4944-adc9-8205fa6aa12a-kube-api-access-67c6c\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.671698 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-run-httpd\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.671746 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.671878 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-config-data\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " 
pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.671928 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-scripts\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.672169 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-log-httpd\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.672223 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.672343 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.774796 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67c6c\" (UniqueName: \"kubernetes.io/projected/1f8bb602-e003-4944-adc9-8205fa6aa12a-kube-api-access-67c6c\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.774860 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-run-httpd\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.774883 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.774925 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-config-data\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.774943 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-scripts\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.774997 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-log-httpd\") pod \"ceilometer-0\" 
(UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.775022 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.775059 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.775558 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-run-httpd\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.775568 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-log-httpd\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.780712 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.780711 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.781444 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-config-data\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.782090 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-scripts\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.784348 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.795674 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67c6c\" (UniqueName: \"kubernetes.io/projected/1f8bb602-e003-4944-adc9-8205fa6aa12a-kube-api-access-67c6c\") pod \"ceilometer-0\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " 
pod="openstack/ceilometer-0" Dec 09 14:50:32 crc kubenswrapper[4770]: I1209 14:50:32.907460 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:50:33 crc kubenswrapper[4770]: I1209 14:50:33.375998 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:50:33 crc kubenswrapper[4770]: W1209 14:50:33.380988 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8bb602_e003_4944_adc9_8205fa6aa12a.slice/crio-f2e0df2f7be4d07899e7a437d1b0aa5aef71387c43c2e5d09f93407f9840f763 WatchSource:0}: Error finding container f2e0df2f7be4d07899e7a437d1b0aa5aef71387c43c2e5d09f93407f9840f763: Status 404 returned error can't find the container with id f2e0df2f7be4d07899e7a437d1b0aa5aef71387c43c2e5d09f93407f9840f763 Dec 09 14:50:33 crc kubenswrapper[4770]: I1209 14:50:33.476175 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerStarted","Data":"f2e0df2f7be4d07899e7a437d1b0aa5aef71387c43c2e5d09f93407f9840f763"} Dec 09 14:50:33 crc kubenswrapper[4770]: I1209 14:50:33.478594 4770 generic.go:334] "Generic (PLEG): container finished" podID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerID="c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3" exitCode=0 Dec 09 14:50:33 crc kubenswrapper[4770]: I1209 14:50:33.478663 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jdc" event={"ID":"8c2c48e9-4455-4ff0-9fe4-cc0540654335","Type":"ContainerDied","Data":"c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3"} Dec 09 14:50:33 crc kubenswrapper[4770]: I1209 14:50:33.554124 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:34 crc kubenswrapper[4770]: I1209 14:50:34.499845 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerStarted","Data":"9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5"} Dec 09 14:50:34 crc kubenswrapper[4770]: I1209 14:50:34.503012 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jdc" event={"ID":"8c2c48e9-4455-4ff0-9fe4-cc0540654335","Type":"ContainerStarted","Data":"00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1"} Dec 09 14:50:34 crc kubenswrapper[4770]: I1209 14:50:34.527675 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b4jdc" podStartSLOduration=2.962877121 podStartE2EDuration="5.527658048s" podCreationTimestamp="2025-12-09 14:50:29 +0000 UTC" firstStartedPulling="2025-12-09 14:50:31.450062547 +0000 UTC m=+1663.346264703" lastFinishedPulling="2025-12-09 14:50:34.014843494 +0000 UTC m=+1665.911045630" observedRunningTime="2025-12-09 14:50:34.520980515 +0000 UTC m=+1666.417182651" watchObservedRunningTime="2025-12-09 14:50:34.527658048 +0000 UTC m=+1666.423860184" Dec 09 14:50:34 crc kubenswrapper[4770]: I1209 14:50:34.759029 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 14:50:34 crc kubenswrapper[4770]: I1209 14:50:34.759094 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 
14:50:35 crc kubenswrapper[4770]: I1209 14:50:35.035072 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 14:50:35 crc kubenswrapper[4770]: I1209 14:50:35.053802 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lmzc"] Dec 09 14:50:35 crc kubenswrapper[4770]: I1209 14:50:35.079156 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 14:50:35 crc kubenswrapper[4770]: I1209 14:50:35.518640 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerStarted","Data":"aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393"} Dec 09 14:50:35 crc kubenswrapper[4770]: I1209 14:50:35.518971 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7lmzc" podUID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerName="registry-server" containerID="cri-o://84a3c6833241139020ae2c2f9d325d580c5d1e6ff764e5f5d7ae18d9e6d15490" gracePeriod=2 Dec 09 14:50:35 crc kubenswrapper[4770]: I1209 14:50:35.562480 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 14:50:35 crc kubenswrapper[4770]: I1209 14:50:35.770957 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cfa70b39-dc84-4b2a-ad61-9e01efa16ab8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 14:50:35 crc kubenswrapper[4770]: I1209 14:50:35.770999 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cfa70b39-dc84-4b2a-ad61-9e01efa16ab8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.559979 4770 generic.go:334] "Generic (PLEG): container finished" podID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerID="84a3c6833241139020ae2c2f9d325d580c5d1e6ff764e5f5d7ae18d9e6d15490" exitCode=0 Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.561230 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lmzc" event={"ID":"93303d9d-a8d9-4d34-ba35-ad8ec49222f9","Type":"ContainerDied","Data":"84a3c6833241139020ae2c2f9d325d580c5d1e6ff764e5f5d7ae18d9e6d15490"} Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.581344 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerStarted","Data":"bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23"} Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.736282 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.901638 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-utilities\") pod \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.901698 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n64rg\" (UniqueName: \"kubernetes.io/projected/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-kube-api-access-n64rg\") pod \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.902128 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-catalog-content\") pod \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\" (UID: \"93303d9d-a8d9-4d34-ba35-ad8ec49222f9\") " Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.903011 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-utilities" (OuterVolumeSpecName: "utilities") pod "93303d9d-a8d9-4d34-ba35-ad8ec49222f9" (UID: "93303d9d-a8d9-4d34-ba35-ad8ec49222f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.908853 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-kube-api-access-n64rg" (OuterVolumeSpecName: "kube-api-access-n64rg") pod "93303d9d-a8d9-4d34-ba35-ad8ec49222f9" (UID: "93303d9d-a8d9-4d34-ba35-ad8ec49222f9"). InnerVolumeSpecName "kube-api-access-n64rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.922974 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93303d9d-a8d9-4d34-ba35-ad8ec49222f9" (UID: "93303d9d-a8d9-4d34-ba35-ad8ec49222f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.989701 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 14:50:36 crc kubenswrapper[4770]: I1209 14:50:36.989770 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.004663 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.004699 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n64rg\" (UniqueName: \"kubernetes.io/projected/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-kube-api-access-n64rg\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.004710 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93303d9d-a8d9-4d34-ba35-ad8ec49222f9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.594810 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lmzc" event={"ID":"93303d9d-a8d9-4d34-ba35-ad8ec49222f9","Type":"ContainerDied","Data":"971a815d2fa1265c163626e18edeb6946f8a7f3b322a459320f11f0d1b4a9aa0"} Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.595138 4770 scope.go:117] "RemoveContainer" containerID="84a3c6833241139020ae2c2f9d325d580c5d1e6ff764e5f5d7ae18d9e6d15490" Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.594894 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lmzc" Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.633624 4770 scope.go:117] "RemoveContainer" containerID="0c0cd0a55cafa4db431ed6eb71cba2f10081254b92fb37983bb6fba6250e0929" Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.637108 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lmzc"] Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.651364 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lmzc"] Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.684905 4770 scope.go:117] "RemoveContainer" containerID="9d51dfebb04af4740ce09b93931a5c273a1cb2dd991fb12f9da11200ff3df191" Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.995002 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba5eef0a-3956-4184-a76c-ab3ecc01f110" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 14:50:37 crc kubenswrapper[4770]: I1209 14:50:37.998865 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba5eef0a-3956-4184-a76c-ab3ecc01f110" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 14:50:38 crc kubenswrapper[4770]: I1209 14:50:38.600685 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" path="/var/lib/kubelet/pods/93303d9d-a8d9-4d34-ba35-ad8ec49222f9/volumes" Dec 09 14:50:38 crc kubenswrapper[4770]: I1209 14:50:38.606234 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerStarted","Data":"10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87"} Dec 09 14:50:38 crc kubenswrapper[4770]: I1209 14:50:38.606446 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 14:50:38 crc kubenswrapper[4770]: I1209 14:50:38.641270 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.287655048 podStartE2EDuration="6.641251956s" podCreationTimestamp="2025-12-09 14:50:32 +0000 UTC" firstStartedPulling="2025-12-09 14:50:33.38365632 +0000 UTC m=+1665.279858456" lastFinishedPulling="2025-12-09 14:50:37.737253218 +0000 UTC m=+1669.633455364" observedRunningTime="2025-12-09 14:50:38.6373596 +0000 UTC m=+1670.533561736" watchObservedRunningTime="2025-12-09 14:50:38.641251956 +0000 UTC m=+1670.537454102" Dec 09 14:50:39 crc kubenswrapper[4770]: I1209 14:50:39.993174 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:39 crc kubenswrapper[4770]: I1209 14:50:39.993594 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:40 crc kubenswrapper[4770]: I1209 14:50:40.040934 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:40 crc kubenswrapper[4770]: I1209 14:50:40.678705 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:41 crc kubenswrapper[4770]: I1209 14:50:41.051900 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4jdc"] Dec 09 14:50:41 crc kubenswrapper[4770]: I1209 14:50:41.588376 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:50:41 crc kubenswrapper[4770]: E1209 14:50:41.588865 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:50:42 crc kubenswrapper[4770]: I1209 14:50:42.650511 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b4jdc" podUID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerName="registry-server" containerID="cri-o://00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1" gracePeriod=2 Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.255644 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.369131 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-utilities\") pod \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.369228 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k42k4\" (UniqueName: \"kubernetes.io/projected/8c2c48e9-4455-4ff0-9fe4-cc0540654335-kube-api-access-k42k4\") pod \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.369687 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-catalog-content\") pod \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\" (UID: \"8c2c48e9-4455-4ff0-9fe4-cc0540654335\") " Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.370258 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-utilities" (OuterVolumeSpecName: "utilities") pod "8c2c48e9-4455-4ff0-9fe4-cc0540654335" (UID: "8c2c48e9-4455-4ff0-9fe4-cc0540654335"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.370576 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.376097 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2c48e9-4455-4ff0-9fe4-cc0540654335-kube-api-access-k42k4" (OuterVolumeSpecName: "kube-api-access-k42k4") pod "8c2c48e9-4455-4ff0-9fe4-cc0540654335" (UID: "8c2c48e9-4455-4ff0-9fe4-cc0540654335"). InnerVolumeSpecName "kube-api-access-k42k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.428091 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c2c48e9-4455-4ff0-9fe4-cc0540654335" (UID: "8c2c48e9-4455-4ff0-9fe4-cc0540654335"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.473617 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c2c48e9-4455-4ff0-9fe4-cc0540654335-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.473675 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k42k4\" (UniqueName: \"kubernetes.io/projected/8c2c48e9-4455-4ff0-9fe4-cc0540654335-kube-api-access-k42k4\") on node \"crc\" DevicePath \"\"" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.662273 4770 generic.go:334] "Generic (PLEG): container finished" podID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerID="00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1" exitCode=0 Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.662318 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jdc" event={"ID":"8c2c48e9-4455-4ff0-9fe4-cc0540654335","Type":"ContainerDied","Data":"00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1"} Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.662326 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4jdc" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.662352 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jdc" event={"ID":"8c2c48e9-4455-4ff0-9fe4-cc0540654335","Type":"ContainerDied","Data":"effec86962f7459a0876fedd8e619345067a62cdf5d78138de72483317c75b2c"} Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.662388 4770 scope.go:117] "RemoveContainer" containerID="00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.688834 4770 scope.go:117] "RemoveContainer" containerID="c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.715591 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4jdc"] Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.723659 4770 scope.go:117] "RemoveContainer" containerID="22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.725403 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b4jdc"] Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.781096 4770 scope.go:117] "RemoveContainer" containerID="00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1" Dec 09 14:50:43 crc kubenswrapper[4770]: E1209 14:50:43.789575 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1\": container with ID starting with 00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1 not found: ID does not exist" containerID="00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.789627 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1"} err="failed to get container status \"00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1\": rpc error: code = NotFound desc = could not find container \"00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1\": container with ID starting with 00ce611d5f215fa9d90072ef67b47226e29f9ff090a31aa6695322075bdbffa1 not found: ID does not exist" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.789662 4770 scope.go:117] "RemoveContainer" containerID="c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3" Dec 09 14:50:43 crc kubenswrapper[4770]: E1209 14:50:43.790043 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3\": container with ID starting with c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3 not found: ID does not exist" containerID="c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.790082 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3"} err="failed to get container status \"c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3\": rpc error: code = NotFound desc = could not find 
container \"c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3\": container with ID starting with c31d649e55a88fad79cc4926b88848c6ccee36a19c3602a0ee0c36bd642f3fd3 not found: ID does not exist" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.790110 4770 scope.go:117] "RemoveContainer" containerID="22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8" Dec 09 14:50:43 crc kubenswrapper[4770]: E1209 14:50:43.790459 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8\": container with ID starting with 22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8 not found: ID does not exist" containerID="22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8" Dec 09 14:50:43 crc kubenswrapper[4770]: I1209 14:50:43.790488 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8"} err="failed to get container status \"22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8\": rpc error: code = NotFound desc = could not find container \"22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8\": container with ID starting with 22ef133d51054f816a1bf827b2358185d27c19b4d7a8f8baa29472359ae2c9f8 not found: ID does not exist" Dec 09 14:50:44 crc kubenswrapper[4770]: I1209 14:50:44.601561 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" path="/var/lib/kubelet/pods/8c2c48e9-4455-4ff0-9fe4-cc0540654335/volumes" Dec 09 14:50:44 crc kubenswrapper[4770]: I1209 14:50:44.770036 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 14:50:44 crc kubenswrapper[4770]: I1209 14:50:44.771450 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 14:50:44 crc kubenswrapper[4770]: I1209 14:50:44.772200 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 14:50:44 crc kubenswrapper[4770]: I1209 14:50:44.791560 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 14:50:45 crc kubenswrapper[4770]: I1209 14:50:45.682752 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 14:50:45 crc kubenswrapper[4770]: I1209 14:50:45.699844 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 14:50:46 crc kubenswrapper[4770]: I1209 14:50:46.998229 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 14:50:46 crc kubenswrapper[4770]: I1209 14:50:46.999829 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 14:50:47 crc kubenswrapper[4770]: I1209 14:50:47.004673 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 14:50:47 crc kubenswrapper[4770]: I1209 14:50:47.704888 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 14:50:52 crc kubenswrapper[4770]: I1209 14:50:52.588398 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:50:52 crc 
kubenswrapper[4770]: E1209 14:50:52.589268 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 14:51:02 crc kubenswrapper[4770]: I1209 14:51:02.918155 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 09 14:51:04 crc kubenswrapper[4770]: I1209 14:51:04.588789 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b"
Dec 09 14:51:04 crc kubenswrapper[4770]: E1209 14:51:04.589354 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.514591 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-9vv8k"]
Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.515095 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-9vv8k"]
Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.555157 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-6pmzg"]
Dec 09 14:51:14 crc kubenswrapper[4770]: E1209 14:51:14.555674 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerName="registry-server"
Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.555741 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerName="registry-server"
Dec 09 14:51:14 crc kubenswrapper[4770]: E1209 14:51:14.555762 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerName="extract-utilities"
Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.555770 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerName="extract-utilities"
Dec 09 14:51:14 crc kubenswrapper[4770]: E1209 14:51:14.555781 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerName="extract-utilities"
Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.555788 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerName="extract-utilities"
Dec 09 14:51:14 crc kubenswrapper[4770]: E1209 14:51:14.555800 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerName="extract-content"
Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.555805 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerName="extract-content"
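Annotation (not part of the journal): "back-off 5m0s restarting failed container" in the entries above means the restart back-off for machine-config-daemon has reached its ceiling. The kubelet's documented restart back-off starts at a 10s delay, doubles on each failed restart, and caps at 5m; a run that stays up long enough resets it. A sketch of that schedule (assumed constants, not kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed constants matching the kubelet's documented restart back-off.
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delay := initialDelay
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart %d: back-off %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // saturates at "back-off 5m0s", as in the entries above
		}
	}
}
```

Dec 09 14:51:14 crc kubenswrapper[4770]: E1209 14:51:14.555820 4770 cpu_manager.go:410] "RemoveStaleState: removing container"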
podUID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerName="registry-server" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.555825 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerName="registry-server" Dec 09 14:51:14 crc kubenswrapper[4770]: E1209 14:51:14.555832 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerName="extract-content" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.555837 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerName="extract-content" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.556089 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2c48e9-4455-4ff0-9fe4-cc0540654335" containerName="registry-server" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.556105 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="93303d9d-a8d9-4d34-ba35-ad8ec49222f9" containerName="registry-server" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.561347 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.564546 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.573428 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-6pmzg"] Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.607963 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbfdafc8-508d-4eec-9496-7058a6d1d49b" path="/var/lib/kubelet/pods/fbfdafc8-508d-4eec-9496-7058a6d1d49b/volumes" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.640716 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-scripts\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.640932 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6l8r\" (UniqueName: \"kubernetes.io/projected/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-kube-api-access-z6l8r\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.640964 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-config-data\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.641015 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-certs\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.641204 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-combined-ca-bundle\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.743336 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-scripts\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.743509 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6l8r\" (UniqueName: \"kubernetes.io/projected/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-kube-api-access-z6l8r\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.743534 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-config-data\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.743587 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-certs\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.743644 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-combined-ca-bundle\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.749404 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-certs\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.750492 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-combined-ca-bundle\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.751240 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-scripts\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.752613 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-config-data\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " 
pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.770969 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6l8r\" (UniqueName: \"kubernetes.io/projected/73c9246e-4ec7-4b19-abcc-8df4fc43a74d-kube-api-access-z6l8r\") pod \"cloudkitty-db-sync-6pmzg\" (UID: \"73c9246e-4ec7-4b19-abcc-8df4fc43a74d\") " pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:14 crc kubenswrapper[4770]: I1209 14:51:14.885503 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-6pmzg" Dec 09 14:51:15 crc kubenswrapper[4770]: I1209 14:51:15.529239 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-6pmzg"] Dec 09 14:51:15 crc kubenswrapper[4770]: I1209 14:51:15.532812 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:51:15 crc kubenswrapper[4770]: E1209 14:51:15.663357 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:51:15 crc kubenswrapper[4770]: E1209 14:51:15.663409 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:51:15 crc kubenswrapper[4770]: E1209 14:51:15.663541 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:51:15 crc kubenswrapper[4770]: E1209 14:51:15.665511 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:51:16 crc kubenswrapper[4770]: I1209 14:51:16.021988 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-6pmzg" event={"ID":"73c9246e-4ec7-4b19-abcc-8df4fc43a74d","Type":"ContainerStarted","Data":"cb189b218ad8419b0f69d24b8169cba452ac2d0bf2f68662364d32cdc1f535ac"} Dec 09 14:51:16 crc kubenswrapper[4770]: E1209 14:51:16.023844 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:51:16 crc kubenswrapper[4770]: I1209 14:51:16.400662 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 14:51:16 crc kubenswrapper[4770]: I1209 14:51:16.804207 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:51:16 crc kubenswrapper[4770]: I1209 14:51:16.804508 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="ceilometer-central-agent" containerID="cri-o://9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5" gracePeriod=30 Dec 09 14:51:16 crc kubenswrapper[4770]: I1209 14:51:16.804596 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="proxy-httpd" containerID="cri-o://10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87" gracePeriod=30 Dec 09 14:51:16 crc kubenswrapper[4770]: I1209 14:51:16.804678 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="sg-core" containerID="cri-o://bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23" gracePeriod=30 Dec 09 14:51:16 crc kubenswrapper[4770]: I1209 14:51:16.804745 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="ceilometer-notification-agent" containerID="cri-o://aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393" gracePeriod=30 Dec 09 14:51:17 crc kubenswrapper[4770]: I1209 14:51:17.036089 4770 generic.go:334] "Generic (PLEG): container finished" podID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerID="bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23" exitCode=2 Dec 09 14:51:17 crc kubenswrapper[4770]: I1209 14:51:17.036142 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerDied","Data":"bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23"} Dec 09 14:51:17 crc kubenswrapper[4770]: E1209 14:51:17.038360 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:51:17 crc kubenswrapper[4770]: I1209 
14:51:17.790529 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.047541 4770 generic.go:334] "Generic (PLEG): container finished" podID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerID="10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87" exitCode=0 Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.047574 4770 generic.go:334] "Generic (PLEG): container finished" podID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerID="9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5" exitCode=0 Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.047594 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerDied","Data":"10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87"} Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.047617 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerDied","Data":"9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5"} Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.810171 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.842793 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-sg-core-conf-yaml\") pod \"1f8bb602-e003-4944-adc9-8205fa6aa12a\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.843047 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-combined-ca-bundle\") pod \"1f8bb602-e003-4944-adc9-8205fa6aa12a\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.844565 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67c6c\" (UniqueName: \"kubernetes.io/projected/1f8bb602-e003-4944-adc9-8205fa6aa12a-kube-api-access-67c6c\") pod \"1f8bb602-e003-4944-adc9-8205fa6aa12a\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.844638 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-run-httpd\") pod \"1f8bb602-e003-4944-adc9-8205fa6aa12a\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.844967 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1f8bb602-e003-4944-adc9-8205fa6aa12a" (UID: "1f8bb602-e003-4944-adc9-8205fa6aa12a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.845447 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-ceilometer-tls-certs\") pod \"1f8bb602-e003-4944-adc9-8205fa6aa12a\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.845787 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-config-data\") pod \"1f8bb602-e003-4944-adc9-8205fa6aa12a\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.845952 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-scripts\") pod \"1f8bb602-e003-4944-adc9-8205fa6aa12a\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.846011 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-log-httpd\") pod \"1f8bb602-e003-4944-adc9-8205fa6aa12a\" (UID: \"1f8bb602-e003-4944-adc9-8205fa6aa12a\") " Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.848451 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1f8bb602-e003-4944-adc9-8205fa6aa12a" (UID: "1f8bb602-e003-4944-adc9-8205fa6aa12a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.856718 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.856777 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f8bb602-e003-4944-adc9-8205fa6aa12a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.880954 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8bb602-e003-4944-adc9-8205fa6aa12a-kube-api-access-67c6c" (OuterVolumeSpecName: "kube-api-access-67c6c") pod "1f8bb602-e003-4944-adc9-8205fa6aa12a" (UID: "1f8bb602-e003-4944-adc9-8205fa6aa12a"). InnerVolumeSpecName "kube-api-access-67c6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.894145 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-scripts" (OuterVolumeSpecName: "scripts") pod "1f8bb602-e003-4944-adc9-8205fa6aa12a" (UID: "1f8bb602-e003-4944-adc9-8205fa6aa12a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.972351 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67c6c\" (UniqueName: \"kubernetes.io/projected/1f8bb602-e003-4944-adc9-8205fa6aa12a-kube-api-access-67c6c\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.972488 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:18 crc kubenswrapper[4770]: I1209 14:51:18.995109 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1f8bb602-e003-4944-adc9-8205fa6aa12a" (UID: "1f8bb602-e003-4944-adc9-8205fa6aa12a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.002129 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f8bb602-e003-4944-adc9-8205fa6aa12a" (UID: "1f8bb602-e003-4944-adc9-8205fa6aa12a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.014902 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1f8bb602-e003-4944-adc9-8205fa6aa12a" (UID: "1f8bb602-e003-4944-adc9-8205fa6aa12a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.046738 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-config-data" (OuterVolumeSpecName: "config-data") pod "1f8bb602-e003-4944-adc9-8205fa6aa12a" (UID: "1f8bb602-e003-4944-adc9-8205fa6aa12a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.059339 4770 generic.go:334] "Generic (PLEG): container finished" podID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerID="aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393" exitCode=0 Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.059396 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerDied","Data":"aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393"} Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.059428 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f8bb602-e003-4944-adc9-8205fa6aa12a","Type":"ContainerDied","Data":"f2e0df2f7be4d07899e7a437d1b0aa5aef71387c43c2e5d09f93407f9840f763"} Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.059493 4770 scope.go:117] "RemoveContainer" containerID="10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.059694 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.074303 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.074339 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.074349 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.074358 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8bb602-e003-4944-adc9-8205fa6aa12a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.129773 4770 scope.go:117] "RemoveContainer" containerID="bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.137510 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.194074 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.202046 4770 scope.go:117] "RemoveContainer" containerID="aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.210015 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:51:19 crc kubenswrapper[4770]: E1209 14:51:19.210490 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="proxy-httpd" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.210508 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="proxy-httpd" Dec 09 14:51:19 crc kubenswrapper[4770]: E1209 14:51:19.210519 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="sg-core" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.210527 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="sg-core" Dec 09 14:51:19 crc kubenswrapper[4770]: E1209 14:51:19.210548 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="ceilometer-central-agent" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.210555 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="ceilometer-central-agent" Dec 09 14:51:19 crc kubenswrapper[4770]: E1209 14:51:19.210572 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="ceilometer-notification-agent" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.210577 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="ceilometer-notification-agent" Dec 09 
14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.210812 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="ceilometer-central-agent" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.210842 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="ceilometer-notification-agent" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.210852 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="sg-core" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.210863 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" containerName="proxy-httpd" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.212909 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.219563 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.221112 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.221585 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.238744 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.258352 4770 scope.go:117] "RemoveContainer" containerID="9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.283003 4770 scope.go:117] "RemoveContainer" containerID="10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87" Dec 09 14:51:19 crc kubenswrapper[4770]: E1209 14:51:19.283952 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87\": container with ID starting with 10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87 not found: ID does not exist" containerID="10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.284033 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87"} err="failed to get container status \"10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87\": rpc error: code = NotFound desc = could not find container \"10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87\": container with ID starting with 10877836a2879daac08f6357a2ee580b172c6340cb322c0fd3e7c82d20068b87 not found: ID does not exist" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.284076 4770 scope.go:117] "RemoveContainer" containerID="bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.284653 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46cbdc7f-5a87-4c97-a56e-910d75b00675-log-httpd\") pod \"ceilometer-0\" (UID: 
\"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.284748 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4tpr\" (UniqueName: \"kubernetes.io/projected/46cbdc7f-5a87-4c97-a56e-910d75b00675-kube-api-access-c4tpr\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: E1209 14:51:19.284789 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23\": container with ID starting with bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23 not found: ID does not exist" containerID="bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.284897 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23"} err="failed to get container status \"bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23\": rpc error: code = NotFound desc = could not find container \"bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23\": container with ID starting with bc1174ce06cd4373943e80a4d70fe306ac3c2e8ad9414565d804bdf596982e23 not found: ID does not exist" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.284975 4770 scope.go:117] "RemoveContainer" containerID="aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.284853 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.285171 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.285243 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-config-data\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.285328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-scripts\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.285403 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46cbdc7f-5a87-4c97-a56e-910d75b00675-run-httpd\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " 
pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.285516 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: E1209 14:51:19.286797 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393\": container with ID starting with aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393 not found: ID does not exist" containerID="aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.286826 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393"} err="failed to get container status \"aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393\": rpc error: code = NotFound desc = could not find container \"aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393\": container with ID starting with aa313cf1f334b95852a6fbeae315164b9ecf72cf3b8cfff5b059f10c4f9ff393 not found: ID does not exist" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.286845 4770 scope.go:117] "RemoveContainer" containerID="9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5" Dec 09 14:51:19 crc kubenswrapper[4770]: E1209 14:51:19.288440 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5\": container with ID starting with 9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5 not found: ID does not exist" containerID="9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.288479 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5"} err="failed to get container status \"9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5\": rpc error: code = NotFound desc = could not find container \"9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5\": container with ID starting with 9575efe523ff8616d29bd687377aeb6585f3236ce4c36e5586e0c7c8a0341be5 not found: ID does not exist" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.387540 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46cbdc7f-5a87-4c97-a56e-910d75b00675-log-httpd\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.387999 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4tpr\" (UniqueName: \"kubernetes.io/projected/46cbdc7f-5a87-4c97-a56e-910d75b00675-kube-api-access-c4tpr\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.388090 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.388203 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.388291 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-config-data\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.388368 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-scripts\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.388439 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46cbdc7f-5a87-4c97-a56e-910d75b00675-run-httpd\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.388560 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.388100 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46cbdc7f-5a87-4c97-a56e-910d75b00675-log-httpd\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.389267 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46cbdc7f-5a87-4c97-a56e-910d75b00675-run-httpd\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.393773 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.394698 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-config-data\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.399761 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-scripts\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.402694 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.406042 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cbdc7f-5a87-4c97-a56e-910d75b00675-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.415512 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4tpr\" (UniqueName: \"kubernetes.io/projected/46cbdc7f-5a87-4c97-a56e-910d75b00675-kube-api-access-c4tpr\") pod \"ceilometer-0\" (UID: \"46cbdc7f-5a87-4c97-a56e-910d75b00675\") " pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.551601 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 14:51:19 crc kubenswrapper[4770]: I1209 14:51:19.589613 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:51:19 crc kubenswrapper[4770]: E1209 14:51:19.590366 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:51:20 crc kubenswrapper[4770]: I1209 14:51:20.242717 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 14:51:20 crc kubenswrapper[4770]: W1209 14:51:20.244509 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46cbdc7f_5a87_4c97_a56e_910d75b00675.slice/crio-a4555bc40c934d84f8a2bf47aad020b41cb7c4363c42d92ad4699a8b39aee190 WatchSource:0}: Error finding container a4555bc40c934d84f8a2bf47aad020b41cb7c4363c42d92ad4699a8b39aee190: Status 404 returned error can't find the container with id a4555bc40c934d84f8a2bf47aad020b41cb7c4363c42d92ad4699a8b39aee190 Dec 09 14:51:20 crc kubenswrapper[4770]: E1209 14:51:20.350551 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:51:20 crc kubenswrapper[4770]: E1209 14:51:20.350611 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:51:20 crc kubenswrapper[4770]: E1209 14:51:20.350772 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 14:51:20 crc kubenswrapper[4770]: I1209 14:51:20.630034 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8bb602-e003-4944-adc9-8205fa6aa12a" path="/var/lib/kubelet/pods/1f8bb602-e003-4944-adc9-8205fa6aa12a/volumes" Dec 09 14:51:21 crc kubenswrapper[4770]: I1209 14:51:21.108608 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46cbdc7f-5a87-4c97-a56e-910d75b00675","Type":"ContainerStarted","Data":"fe0b1f76d55b48d1a2d7addd2d10043d8abc8eb83f7f56a8f21ae7c9f68e687d"} Dec 09 14:51:21 crc kubenswrapper[4770]: I1209 14:51:21.108976 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46cbdc7f-5a87-4c97-a56e-910d75b00675","Type":"ContainerStarted","Data":"a4555bc40c934d84f8a2bf47aad020b41cb7c4363c42d92ad4699a8b39aee190"} Dec 09 14:51:21 crc kubenswrapper[4770]: I1209 14:51:21.652119 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" containerName="rabbitmq" containerID="cri-o://fd0ac14296b88363575acbf86bb4972dc526f2fdb48cee7ef5aa1e33594edf34" gracePeriod=604795 Dec 09 14:51:22 crc kubenswrapper[4770]: I1209 14:51:22.121065 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46cbdc7f-5a87-4c97-a56e-910d75b00675","Type":"ContainerStarted","Data":"8dfd7c15a9b60c3fc8e6b55805b4812cdbde867ee5f100ed8d9d203dbc0bd2ef"} Dec 09 14:51:22 crc kubenswrapper[4770]: I1209 14:51:22.686903 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" containerName="rabbitmq" containerID="cri-o://769cd2dac8b651775fe4e262a9cfff69636101a7f24c7a371e86957852a08b3f" gracePeriod=604796 Dec 09 14:51:23 crc kubenswrapper[4770]: E1209 14:51:23.529164 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:51:24 crc kubenswrapper[4770]: I1209 14:51:24.143146 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46cbdc7f-5a87-4c97-a56e-910d75b00675","Type":"ContainerStarted","Data":"60454a2eef4da898d3e5f763b288622c3b605252e06ed9f98d46823b3dc675fa"} Dec 09 14:51:24 crc kubenswrapper[4770]: I1209 14:51:24.143402 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 14:51:24 crc kubenswrapper[4770]: E1209 14:51:24.144867 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:51:25 crc kubenswrapper[4770]: E1209 14:51:25.153613 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.184930 4770 generic.go:334] "Generic (PLEG): container finished" podID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" containerID="fd0ac14296b88363575acbf86bb4972dc526f2fdb48cee7ef5aa1e33594edf34" exitCode=0 Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.185550 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469","Type":"ContainerDied","Data":"fd0ac14296b88363575acbf86bb4972dc526f2fdb48cee7ef5aa1e33594edf34"} Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.353557 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.433791 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-599fx\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-kube-api-access-599fx\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434359 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434454 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-plugins-conf\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434563 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-pod-info\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434612 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-erlang-cookie\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434694 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-plugins\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434718 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-config-data\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434805 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-server-conf\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434835 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-confd\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434869 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-erlang-cookie-secret\") pod 
\"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.434936 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-tls\") pod \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\" (UID: \"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469\") " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.435860 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.436104 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.436353 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.439667 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.444486 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.448876 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-pod-info" (OuterVolumeSpecName: "pod-info") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.452157 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.460083 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-kube-api-access-599fx" (OuterVolumeSpecName: "kube-api-access-599fx") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "kube-api-access-599fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.485326 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248" (OuterVolumeSpecName: "persistence") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "pvc-95603584-9767-4d13-92d5-e6f21299e248". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.537611 4770 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.538002 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.538017 4770 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.538034 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.538046 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-599fx\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-kube-api-access-599fx\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.538082 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") on node \"crc\" " Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.538095 4770 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.561481 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-server-conf" (OuterVolumeSpecName: "server-conf") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.585914 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-config-data" (OuterVolumeSpecName: "config-data") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.595614 4770 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.596065 4770 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-95603584-9767-4d13-92d5-e6f21299e248" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248") on node "crc" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.641553 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.641610 4770 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.641626 4770 reconciler_common.go:293] "Volume detached for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.660189 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" (UID: "0c3c3035-f6c2-4e3b-a244-f5cc01a7b469"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:51:28 crc kubenswrapper[4770]: I1209 14:51:28.744562 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.217565 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c3c3035-f6c2-4e3b-a244-f5cc01a7b469","Type":"ContainerDied","Data":"28d462eca7c6f34b8e24f5fd838250f145546551cbd55d7d968a290b473a6374"} Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.217627 4770 scope.go:117] "RemoveContainer" containerID="fd0ac14296b88363575acbf86bb4972dc526f2fdb48cee7ef5aa1e33594edf34" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.217814 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.225588 4770 generic.go:334] "Generic (PLEG): container finished" podID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" containerID="769cd2dac8b651775fe4e262a9cfff69636101a7f24c7a371e86957852a08b3f" exitCode=0 Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.225846 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31bb1b14-4de1-4586-8bde-d29afdaad6fd","Type":"ContainerDied","Data":"769cd2dac8b651775fe4e262a9cfff69636101a7f24c7a371e86957852a08b3f"} Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.296229 4770 scope.go:117] "RemoveContainer" containerID="d49bf7eb57ed05ebfbd4c93d645ad809edd662e415eb80a9cfd4911f513fdb00" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.303105 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.313131 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.364406 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 14:51:29 crc kubenswrapper[4770]: E1209 14:51:29.364862 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" containerName="setup-container" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.364876 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" containerName="setup-container" Dec 09 14:51:29 crc kubenswrapper[4770]: E1209 14:51:29.364920 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" containerName="rabbitmq" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.364927 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" containerName="rabbitmq" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.365124 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" containerName="rabbitmq" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.366320 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.372561 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.372767 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4rpf8" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.373003 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.372819 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.372852 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.372877 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.393330 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.426007 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466067 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsz2m\" (UniqueName: \"kubernetes.io/projected/7caea4cd-eb43-420d-8c5e-835745de19e8-kube-api-access-rsz2m\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466121 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7caea4cd-eb43-420d-8c5e-835745de19e8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466177 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466211 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7caea4cd-eb43-420d-8c5e-835745de19e8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466248 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7caea4cd-eb43-420d-8c5e-835745de19e8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466288 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/7caea4cd-eb43-420d-8c5e-835745de19e8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466365 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466384 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466427 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466457 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7caea4cd-eb43-420d-8c5e-835745de19e8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.466495 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.581894 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.581976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsz2m\" (UniqueName: \"kubernetes.io/projected/7caea4cd-eb43-420d-8c5e-835745de19e8-kube-api-access-rsz2m\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.582005 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7caea4cd-eb43-420d-8c5e-835745de19e8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.582076 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.582105 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7caea4cd-eb43-420d-8c5e-835745de19e8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.582134 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7caea4cd-eb43-420d-8c5e-835745de19e8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.582168 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7caea4cd-eb43-420d-8c5e-835745de19e8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.582227 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.582257 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.582308 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.582344 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7caea4cd-eb43-420d-8c5e-835745de19e8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.584174 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7caea4cd-eb43-420d-8c5e-835745de19e8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.585532 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7caea4cd-eb43-420d-8c5e-835745de19e8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.585904 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.592496 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7caea4cd-eb43-420d-8c5e-835745de19e8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.596717 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.597381 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7caea4cd-eb43-420d-8c5e-835745de19e8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.614405 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.614568 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7caea4cd-eb43-420d-8c5e-835745de19e8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.619142 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7caea4cd-eb43-420d-8c5e-835745de19e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.632837 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsz2m\" (UniqueName: \"kubernetes.io/projected/7caea4cd-eb43-420d-8c5e-835745de19e8-kube-api-access-rsz2m\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.643438 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.643483 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ff190b9c014c406df55c6aa1d5eb46bda5c01a4221266cf4574e441f95c466d2/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.716699 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.769320 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-95603584-9767-4d13-92d5-e6f21299e248\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95603584-9767-4d13-92d5-e6f21299e248\") pod \"rabbitmq-server-0\" (UID: \"7caea4cd-eb43-420d-8c5e-835745de19e8\") " pod="openstack/rabbitmq-server-0" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.787510 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-confd\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.787604 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-config-data\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.787663 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-tls\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.787758 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-plugins\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.787790 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-erlang-cookie\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.787815 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31bb1b14-4de1-4586-8bde-d29afdaad6fd-pod-info\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.787865 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-server-conf\") pod 
\"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.788857 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8tqs\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-kube-api-access-m8tqs\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.789327 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.794329 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31bb1b14-4de1-4586-8bde-d29afdaad6fd-erlang-cookie-secret\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.795195 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.795662 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.795809 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-plugins-conf\") pod \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\" (UID: \"31bb1b14-4de1-4586-8bde-d29afdaad6fd\") " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.808841 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.810391 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-kube-api-access-m8tqs" (OuterVolumeSpecName: "kube-api-access-m8tqs") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "kube-api-access-m8tqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.810988 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bb1b14-4de1-4586-8bde-d29afdaad6fd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.811759 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.811805 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.811821 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8tqs\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-kube-api-access-m8tqs\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.811833 4770 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31bb1b14-4de1-4586-8bde-d29afdaad6fd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.811846 4770 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.820863 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/31bb1b14-4de1-4586-8bde-d29afdaad6fd-pod-info" (OuterVolumeSpecName: "pod-info") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.821145 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.854618 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-config-data" (OuterVolumeSpecName: "config-data") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.855926 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac" (OuterVolumeSpecName: "persistence") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). 
InnerVolumeSpecName "pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.903683 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-server-conf" (OuterVolumeSpecName: "server-conf") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.913687 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.913742 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.913756 4770 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31bb1b14-4de1-4586-8bde-d29afdaad6fd-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.913766 4770 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31bb1b14-4de1-4586-8bde-d29afdaad6fd-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.913796 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") on node \"crc\" " Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.969486 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "31bb1b14-4de1-4586-8bde-d29afdaad6fd" (UID: "31bb1b14-4de1-4586-8bde-d29afdaad6fd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.973838 4770 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 09 14:51:29 crc kubenswrapper[4770]: I1209 14:51:29.974050 4770 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac") on node "crc" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.015612 4770 reconciler_common.go:293] "Volume detached for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.015750 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31bb1b14-4de1-4586-8bde-d29afdaad6fd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.075828 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.254712 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31bb1b14-4de1-4586-8bde-d29afdaad6fd","Type":"ContainerDied","Data":"d9d578e145b22c4b38c8c7f35c26471eab8f74da2795077a24e36fa2e3ec176a"} Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.255062 4770 scope.go:117] "RemoveContainer" containerID="769cd2dac8b651775fe4e262a9cfff69636101a7f24c7a371e86957852a08b3f" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.255297 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.273115 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-ndskk"] Dec 09 14:51:30 crc kubenswrapper[4770]: E1209 14:51:30.273513 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" containerName="rabbitmq" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.273524 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" containerName="rabbitmq" Dec 09 14:51:30 crc kubenswrapper[4770]: E1209 14:51:30.273556 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" containerName="setup-container" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.273564 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" containerName="setup-container" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.273758 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" containerName="rabbitmq" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.276897 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.282653 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.293577 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-ndskk"] Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.345157 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.345203 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.345237 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.345273 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.345294 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-config\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.345409 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.345452 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnqv8\" (UniqueName: \"kubernetes.io/projected/1b059914-62e4-4f34-bcee-4c10cd444e4c-kube-api-access-fnqv8\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.446620 4770 scope.go:117] "RemoveContainer" containerID="0124084ad1afd737a4b241af8de506900eae154a555a4d4a8ce733160ae30b59" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.447543 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.447595 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnqv8\" (UniqueName: \"kubernetes.io/projected/1b059914-62e4-4f34-bcee-4c10cd444e4c-kube-api-access-fnqv8\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.447624 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.447644 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.447674 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.447710 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.447797 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-config\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.448701 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.448708 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.452380 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.452510 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.452608 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-config\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.453180 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.454936 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.491603 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnqv8\" (UniqueName: \"kubernetes.io/projected/1b059914-62e4-4f34-bcee-4c10cd444e4c-kube-api-access-fnqv8\") pod \"dnsmasq-dns-dc7c944bf-ndskk\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.508609 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.545343 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.547533 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.549680 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.551859 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5mt88" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.552029 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.552178 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.552281 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.552379 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.552527 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.586064 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.608437 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c3c3035-f6c2-4e3b-a244-f5cc01a7b469" path="/var/lib/kubelet/pods/0c3c3035-f6c2-4e3b-a244-f5cc01a7b469/volumes" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.613452 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" path="/var/lib/kubelet/pods/31bb1b14-4de1-4586-8bde-d29afdaad6fd/volumes" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.654445 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmm2x\" (UniqueName: \"kubernetes.io/projected/f777c49a-0725-4856-895f-06add0375093-kube-api-access-zmm2x\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.654510 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f777c49a-0725-4856-895f-06add0375093-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.654644 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f777c49a-0725-4856-895f-06add0375093-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.654810 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.655087 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f777c49a-0725-4856-895f-06add0375093-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.655166 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f777c49a-0725-4856-895f-06add0375093-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.655229 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f777c49a-0725-4856-895f-06add0375093-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.655246 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f777c49a-0725-4856-895f-06add0375093-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.655325 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f777c49a-0725-4856-895f-06add0375093-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.655356 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f777c49a-0725-4856-895f-06add0375093-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.655384 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f777c49a-0725-4856-895f-06add0375093-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.731252 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758442 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f777c49a-0725-4856-895f-06add0375093-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758499 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f777c49a-0725-4856-895f-06add0375093-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758521 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f777c49a-0725-4856-895f-06add0375093-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758535 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f777c49a-0725-4856-895f-06add0375093-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758570 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f777c49a-0725-4856-895f-06add0375093-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758585 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f777c49a-0725-4856-895f-06add0375093-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758604 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f777c49a-0725-4856-895f-06add0375093-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758642 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmm2x\" (UniqueName: \"kubernetes.io/projected/f777c49a-0725-4856-895f-06add0375093-kube-api-access-zmm2x\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758661 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f777c49a-0725-4856-895f-06add0375093-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758682 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f777c49a-0725-4856-895f-06add0375093-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.758744 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.759572 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f777c49a-0725-4856-895f-06add0375093-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.759802 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f777c49a-0725-4856-895f-06add0375093-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.760044 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f777c49a-0725-4856-895f-06add0375093-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.762470 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f777c49a-0725-4856-895f-06add0375093-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.763620 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f777c49a-0725-4856-895f-06add0375093-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.763753 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f777c49a-0725-4856-895f-06add0375093-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.763902 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f777c49a-0725-4856-895f-06add0375093-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.764963 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/f777c49a-0725-4856-895f-06add0375093-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.765297 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f777c49a-0725-4856-895f-06add0375093-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.765953 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.766051 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/59eb480a7f90adad28486c2e8d3eef49ac145acc82eca555e619178fb850ca35/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.777940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmm2x\" (UniqueName: \"kubernetes.io/projected/f777c49a-0725-4856-895f-06add0375093-kube-api-access-zmm2x\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.833032 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6079d29-3bb2-4ebe-bef3-369971dd92ac\") pod \"rabbitmq-cell1-server-0\" (UID: \"f777c49a-0725-4856-895f-06add0375093\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.859299 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 14:51:30 crc kubenswrapper[4770]: I1209 14:51:30.883181 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:51:31 crc kubenswrapper[4770]: I1209 14:51:31.210074 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-ndskk"] Dec 09 14:51:31 crc kubenswrapper[4770]: I1209 14:51:31.294890 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" event={"ID":"1b059914-62e4-4f34-bcee-4c10cd444e4c","Type":"ContainerStarted","Data":"22b7c067c50c855f55c922b9a9d261e45bd4cb9f20983f1dc9ecc3f62ae95b95"} Dec 09 14:51:31 crc kubenswrapper[4770]: I1209 14:51:31.296946 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7caea4cd-eb43-420d-8c5e-835745de19e8","Type":"ContainerStarted","Data":"0814d32d5ffaa2e8efa9df066d353e1d493983794ce8f0dfea9f21ca15772ff7"} Dec 09 14:51:31 crc kubenswrapper[4770]: I1209 14:51:31.496804 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 14:51:31 crc kubenswrapper[4770]: E1209 14:51:31.711892 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:51:31 crc kubenswrapper[4770]: E1209 14:51:31.711948 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:51:31 crc kubenswrapper[4770]: E1209 14:51:31.712080 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:51:31 crc kubenswrapper[4770]: E1209 14:51:31.713480 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:51:32 crc kubenswrapper[4770]: I1209 14:51:32.309179 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b059914-62e4-4f34-bcee-4c10cd444e4c" containerID="afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d" exitCode=0 Dec 09 14:51:32 crc kubenswrapper[4770]: I1209 14:51:32.309270 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" event={"ID":"1b059914-62e4-4f34-bcee-4c10cd444e4c","Type":"ContainerDied","Data":"afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d"} Dec 09 14:51:32 crc kubenswrapper[4770]: I1209 14:51:32.311674 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f777c49a-0725-4856-895f-06add0375093","Type":"ContainerStarted","Data":"61cd198e8418c0aa3d9c838860d5910ae0921c59b44d76647098d23e00fe3dc5"} Dec 09 14:51:32 crc kubenswrapper[4770]: I1209 14:51:32.589080 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:51:32 crc kubenswrapper[4770]: E1209 14:51:32.589711 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:51:33 crc kubenswrapper[4770]: I1209 14:51:33.335117 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7caea4cd-eb43-420d-8c5e-835745de19e8","Type":"ContainerStarted","Data":"8ba3b4a6a09d5c2c65057d229a40413a78d983fbb4427c3488a2c21ebf53ca2d"} Dec 09 14:51:33 crc kubenswrapper[4770]: I1209 14:51:33.339116 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" event={"ID":"1b059914-62e4-4f34-bcee-4c10cd444e4c","Type":"ContainerStarted","Data":"889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7"} Dec 09 14:51:33 crc kubenswrapper[4770]: I1209 14:51:33.339258 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:33 crc kubenswrapper[4770]: I1209 14:51:33.382478 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" podStartSLOduration=3.38243981 podStartE2EDuration="3.38243981s" podCreationTimestamp="2025-12-09 14:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:51:33.380120646 +0000 UTC m=+1725.276322782" watchObservedRunningTime="2025-12-09 14:51:33.38243981 +0000 UTC m=+1725.278641946" Dec 09 14:51:34 crc kubenswrapper[4770]: I1209 14:51:34.353413 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f777c49a-0725-4856-895f-06add0375093","Type":"ContainerStarted","Data":"adf8b15b2691ae3048abd6309cb8db39482dc7e55a82c7ba9c4a3de1bf820fd5"} Dec 09 14:51:34 crc kubenswrapper[4770]: I1209 14:51:34.641464 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="31bb1b14-4de1-4586-8bde-d29afdaad6fd" 
containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: i/o timeout" Dec 09 14:51:39 crc kubenswrapper[4770]: I1209 14:51:39.594561 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 14:51:39 crc kubenswrapper[4770]: E1209 14:51:39.715326 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:51:39 crc kubenswrapper[4770]: E1209 14:51:39.715644 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:51:39 crc kubenswrapper[4770]: E1209 14:51:39.715815 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:51:39 crc kubenswrapper[4770]: E1209 14:51:39.717115 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:51:40 crc kubenswrapper[4770]: E1209 14:51:40.419430 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:51:40 crc kubenswrapper[4770]: I1209 14:51:40.732902 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:40 crc kubenswrapper[4770]: I1209 14:51:40.819260 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-chhpb"] Dec 09 14:51:40 crc kubenswrapper[4770]: I1209 14:51:40.819598 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dd998c-chhpb" podUID="8534b0f6-a782-4296-8a6f-f0eefb01d33a" containerName="dnsmasq-dns" containerID="cri-o://a26abe63e8b325888effe35c6b7744e2299e3cf1a91158c47f3ba62c52e48fc8" gracePeriod=10 Dec 09 14:51:40 crc kubenswrapper[4770]: I1209 14:51:40.941147 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-bmz6l"] Dec 09 14:51:40 crc kubenswrapper[4770]: I1209 14:51:40.943494 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:40 crc kubenswrapper[4770]: I1209 14:51:40.992757 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-bmz6l"] Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.015535 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5xqc\" (UniqueName: \"kubernetes.io/projected/60261685-a8d1-4122-85a3-42157081385f-kube-api-access-w5xqc\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.015640 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.015697 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.015758 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.015891 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-config\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.015971 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.015992 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.119307 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5xqc\" (UniqueName: \"kubernetes.io/projected/60261685-a8d1-4122-85a3-42157081385f-kube-api-access-w5xqc\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.119458 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.119505 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.119566 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.119694 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-config\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.119816 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.119845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.120477 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.120553 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.120581 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.120784 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.120853 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-config\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.121162 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60261685-a8d1-4122-85a3-42157081385f-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.143933 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5xqc\" (UniqueName: \"kubernetes.io/projected/60261685-a8d1-4122-85a3-42157081385f-kube-api-access-w5xqc\") pod \"dnsmasq-dns-c4b758ff5-bmz6l\" (UID: \"60261685-a8d1-4122-85a3-42157081385f\") " pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.334268 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.436583 4770 generic.go:334] "Generic (PLEG): container finished" podID="8534b0f6-a782-4296-8a6f-f0eefb01d33a" containerID="a26abe63e8b325888effe35c6b7744e2299e3cf1a91158c47f3ba62c52e48fc8" exitCode=0 Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.436908 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-chhpb" event={"ID":"8534b0f6-a782-4296-8a6f-f0eefb01d33a","Type":"ContainerDied","Data":"a26abe63e8b325888effe35c6b7744e2299e3cf1a91158c47f3ba62c52e48fc8"} Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.436941 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-chhpb" event={"ID":"8534b0f6-a782-4296-8a6f-f0eefb01d33a","Type":"ContainerDied","Data":"9042ac51e9bc479acad542da911c7740f21f6c8b7015842138f953d25d11e507"} Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.436959 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9042ac51e9bc479acad542da911c7740f21f6c8b7015842138f953d25d11e507" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.483949 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.533116 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-sb\") pod \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.533194 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-swift-storage-0\") pod \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.533314 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-config\") pod \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.533406 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r7pj\" (UniqueName: \"kubernetes.io/projected/8534b0f6-a782-4296-8a6f-f0eefb01d33a-kube-api-access-8r7pj\") pod \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.533532 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-svc\") pod \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.533706 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-nb\") pod \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\" (UID: \"8534b0f6-a782-4296-8a6f-f0eefb01d33a\") " Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.553664 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8534b0f6-a782-4296-8a6f-f0eefb01d33a-kube-api-access-8r7pj" (OuterVolumeSpecName: "kube-api-access-8r7pj") pod "8534b0f6-a782-4296-8a6f-f0eefb01d33a" (UID: "8534b0f6-a782-4296-8a6f-f0eefb01d33a"). InnerVolumeSpecName "kube-api-access-8r7pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.635921 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8534b0f6-a782-4296-8a6f-f0eefb01d33a" (UID: "8534b0f6-a782-4296-8a6f-f0eefb01d33a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.637027 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.637073 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r7pj\" (UniqueName: \"kubernetes.io/projected/8534b0f6-a782-4296-8a6f-f0eefb01d33a-kube-api-access-8r7pj\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.640107 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-config" (OuterVolumeSpecName: "config") pod "8534b0f6-a782-4296-8a6f-f0eefb01d33a" (UID: "8534b0f6-a782-4296-8a6f-f0eefb01d33a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.643153 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8534b0f6-a782-4296-8a6f-f0eefb01d33a" (UID: "8534b0f6-a782-4296-8a6f-f0eefb01d33a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.648821 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8534b0f6-a782-4296-8a6f-f0eefb01d33a" (UID: "8534b0f6-a782-4296-8a6f-f0eefb01d33a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.653799 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8534b0f6-a782-4296-8a6f-f0eefb01d33a" (UID: "8534b0f6-a782-4296-8a6f-f0eefb01d33a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.740268 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.740328 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.740342 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.740355 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8534b0f6-a782-4296-8a6f-f0eefb01d33a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:41 crc kubenswrapper[4770]: I1209 14:51:41.883775 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-bmz6l"] Dec 09 14:51:41 crc kubenswrapper[4770]: W1209 14:51:41.884349 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60261685_a8d1_4122_85a3_42157081385f.slice/crio-7e62e38c28db149fb4f1d854c43b549232a9d9b733004c874e50e8cfd89058bc WatchSource:0}: Error finding container 7e62e38c28db149fb4f1d854c43b549232a9d9b733004c874e50e8cfd89058bc: Status 404 returned error can't find the container with id 7e62e38c28db149fb4f1d854c43b549232a9d9b733004c874e50e8cfd89058bc Dec 09 14:51:42 crc kubenswrapper[4770]: I1209 14:51:42.447977 4770 generic.go:334] "Generic (PLEG): container finished" podID="60261685-a8d1-4122-85a3-42157081385f" containerID="e443e67c4275d8dc89139cb05787094e67fe2edbda6b58bb8b49988fbf4dfd80" exitCode=0 Dec 09 14:51:42 crc kubenswrapper[4770]: I1209 14:51:42.448026 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" event={"ID":"60261685-a8d1-4122-85a3-42157081385f","Type":"ContainerDied","Data":"e443e67c4275d8dc89139cb05787094e67fe2edbda6b58bb8b49988fbf4dfd80"} Dec 09 14:51:42 crc kubenswrapper[4770]: I1209 14:51:42.448535 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" event={"ID":"60261685-a8d1-4122-85a3-42157081385f","Type":"ContainerStarted","Data":"7e62e38c28db149fb4f1d854c43b549232a9d9b733004c874e50e8cfd89058bc"} Dec 09 14:51:42 crc kubenswrapper[4770]: I1209 14:51:42.448644 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-chhpb" Dec 09 14:51:42 crc kubenswrapper[4770]: I1209 14:51:42.661375 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-chhpb"] Dec 09 14:51:42 crc kubenswrapper[4770]: I1209 14:51:42.694835 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-chhpb"] Dec 09 14:51:43 crc kubenswrapper[4770]: I1209 14:51:43.462369 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" event={"ID":"60261685-a8d1-4122-85a3-42157081385f","Type":"ContainerStarted","Data":"e25fc3a7cf0095b0a8c9a1b12c79de9d60fa0701fdb0cffbc6926154adadbcf1"} Dec 09 14:51:43 crc kubenswrapper[4770]: I1209 14:51:43.462992 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:43 crc kubenswrapper[4770]: I1209 14:51:43.516823 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" podStartSLOduration=3.51680055 podStartE2EDuration="3.51680055s" podCreationTimestamp="2025-12-09 14:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:51:43.500540977 +0000 UTC m=+1735.396743113" watchObservedRunningTime="2025-12-09 14:51:43.51680055 +0000 UTC m=+1735.413002686" Dec 09 14:51:44 crc kubenswrapper[4770]: E1209 14:51:44.590627 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:51:44 crc kubenswrapper[4770]: I1209 14:51:44.601378 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8534b0f6-a782-4296-8a6f-f0eefb01d33a" path="/var/lib/kubelet/pods/8534b0f6-a782-4296-8a6f-f0eefb01d33a/volumes" Dec 09 14:51:47 crc kubenswrapper[4770]: I1209 14:51:47.591343 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:51:47 crc kubenswrapper[4770]: E1209 14:51:47.593830 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:51:51 crc kubenswrapper[4770]: I1209 14:51:51.335933 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4b758ff5-bmz6l" Dec 09 14:51:51 crc kubenswrapper[4770]: I1209 14:51:51.433575 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-ndskk"] Dec 09 14:51:51 crc kubenswrapper[4770]: I1209 14:51:51.441136 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" podUID="1b059914-62e4-4f34-bcee-4c10cd444e4c" containerName="dnsmasq-dns" containerID="cri-o://889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7" gracePeriod=10 Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.084928 4770 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.135836 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-sb\") pod \"1b059914-62e4-4f34-bcee-4c10cd444e4c\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.135956 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-swift-storage-0\") pod \"1b059914-62e4-4f34-bcee-4c10cd444e4c\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.136067 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnqv8\" (UniqueName: \"kubernetes.io/projected/1b059914-62e4-4f34-bcee-4c10cd444e4c-kube-api-access-fnqv8\") pod \"1b059914-62e4-4f34-bcee-4c10cd444e4c\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.136139 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-svc\") pod \"1b059914-62e4-4f34-bcee-4c10cd444e4c\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.136239 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-config\") pod \"1b059914-62e4-4f34-bcee-4c10cd444e4c\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.136284 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-openstack-edpm-ipam\") pod \"1b059914-62e4-4f34-bcee-4c10cd444e4c\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.136312 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-nb\") pod \"1b059914-62e4-4f34-bcee-4c10cd444e4c\" (UID: \"1b059914-62e4-4f34-bcee-4c10cd444e4c\") " Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.157090 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b059914-62e4-4f34-bcee-4c10cd444e4c-kube-api-access-fnqv8" (OuterVolumeSpecName: "kube-api-access-fnqv8") pod "1b059914-62e4-4f34-bcee-4c10cd444e4c" (UID: "1b059914-62e4-4f34-bcee-4c10cd444e4c"). InnerVolumeSpecName "kube-api-access-fnqv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.212817 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b059914-62e4-4f34-bcee-4c10cd444e4c" (UID: "1b059914-62e4-4f34-bcee-4c10cd444e4c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.214943 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b059914-62e4-4f34-bcee-4c10cd444e4c" (UID: "1b059914-62e4-4f34-bcee-4c10cd444e4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.224318 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1b059914-62e4-4f34-bcee-4c10cd444e4c" (UID: "1b059914-62e4-4f34-bcee-4c10cd444e4c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.227505 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b059914-62e4-4f34-bcee-4c10cd444e4c" (UID: "1b059914-62e4-4f34-bcee-4c10cd444e4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.231686 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "1b059914-62e4-4f34-bcee-4c10cd444e4c" (UID: "1b059914-62e4-4f34-bcee-4c10cd444e4c"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.241025 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.241056 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.241066 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.241075 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.241086 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnqv8\" (UniqueName: \"kubernetes.io/projected/1b059914-62e4-4f34-bcee-4c10cd444e4c-kube-api-access-fnqv8\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.241096 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.249938 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-config" (OuterVolumeSpecName: "config") pod "1b059914-62e4-4f34-bcee-4c10cd444e4c" (UID: "1b059914-62e4-4f34-bcee-4c10cd444e4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.343184 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b059914-62e4-4f34-bcee-4c10cd444e4c-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.556340 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b059914-62e4-4f34-bcee-4c10cd444e4c" containerID="889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7" exitCode=0 Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.556403 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" event={"ID":"1b059914-62e4-4f34-bcee-4c10cd444e4c","Type":"ContainerDied","Data":"889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7"} Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.556801 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" event={"ID":"1b059914-62e4-4f34-bcee-4c10cd444e4c","Type":"ContainerDied","Data":"22b7c067c50c855f55c922b9a9d261e45bd4cb9f20983f1dc9ecc3f62ae95b95"} Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.556823 4770 scope.go:117] "RemoveContainer" containerID="889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.556414 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-ndskk" Dec 09 14:51:52 crc kubenswrapper[4770]: E1209 14:51:52.590238 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.692023 4770 scope.go:117] "RemoveContainer" containerID="afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.704800 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-ndskk"] Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.713387 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-ndskk"] Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.731914 4770 scope.go:117] "RemoveContainer" containerID="889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7" Dec 09 14:51:52 crc kubenswrapper[4770]: E1209 14:51:52.732521 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7\": container with ID starting with 889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7 not found: ID does not exist" containerID="889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.732559 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7"} err="failed to get container status \"889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7\": rpc error: code = NotFound desc = could not find container \"889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7\": container with ID starting with 889cc47caaf4638ae0c489d21a0b8dc160b5afb5e325a237f98de239c686bfd7 not found: ID does not exist" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.732587 4770 scope.go:117] "RemoveContainer" containerID="afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d" Dec 09 14:51:52 crc kubenswrapper[4770]: E1209 14:51:52.733342 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d\": container with ID starting with afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d not found: ID does not exist" containerID="afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d" Dec 09 14:51:52 crc kubenswrapper[4770]: I1209 14:51:52.733363 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d"} err="failed to get container status \"afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d\": rpc error: code = NotFound desc = could not find container \"afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d\": container with ID starting with afa5d79c152d438ec543ef27a0271246e5f8ec9957332db6750dce11cf37e40d not found: ID does not exist" Dec 09 14:51:54 crc kubenswrapper[4770]: I1209 14:51:54.607306 4770 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1b059914-62e4-4f34-bcee-4c10cd444e4c" path="/var/lib/kubelet/pods/1b059914-62e4-4f34-bcee-4c10cd444e4c/volumes" Dec 09 14:51:57 crc kubenswrapper[4770]: E1209 14:51:57.714044 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:51:57 crc kubenswrapper[4770]: E1209 14:51:57.714370 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:51:57 crc kubenswrapper[4770]: E1209 14:51:57.714487 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fa
lse,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:51:57 crc kubenswrapper[4770]: E1209 14:51:57.715638 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:51:59 crc kubenswrapper[4770]: I1209 14:51:59.588409 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:51:59 crc kubenswrapper[4770]: E1209 14:51:59.588883 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.683956 4770 generic.go:334] "Generic (PLEG): container finished" podID="7caea4cd-eb43-420d-8c5e-835745de19e8" containerID="8ba3b4a6a09d5c2c65057d229a40413a78d983fbb4427c3488a2c21ebf53ca2d" exitCode=0 Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.684048 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7caea4cd-eb43-420d-8c5e-835745de19e8","Type":"ContainerDied","Data":"8ba3b4a6a09d5c2c65057d229a40413a78d983fbb4427c3488a2c21ebf53ca2d"} Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.821052 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg"] Dec 09 14:52:04 crc kubenswrapper[4770]: E1209 14:52:04.821891 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b059914-62e4-4f34-bcee-4c10cd444e4c" containerName="dnsmasq-dns" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.821913 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b059914-62e4-4f34-bcee-4c10cd444e4c" containerName="dnsmasq-dns" Dec 09 14:52:04 crc kubenswrapper[4770]: E1209 14:52:04.821939 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8534b0f6-a782-4296-8a6f-f0eefb01d33a" containerName="dnsmasq-dns" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.821948 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8534b0f6-a782-4296-8a6f-f0eefb01d33a" containerName="dnsmasq-dns" Dec 09 14:52:04 crc kubenswrapper[4770]: E1209 14:52:04.821993 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1b059914-62e4-4f34-bcee-4c10cd444e4c" containerName="init" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.822001 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b059914-62e4-4f34-bcee-4c10cd444e4c" containerName="init" Dec 09 14:52:04 crc kubenswrapper[4770]: E1209 14:52:04.822029 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8534b0f6-a782-4296-8a6f-f0eefb01d33a" containerName="init" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.822038 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8534b0f6-a782-4296-8a6f-f0eefb01d33a" containerName="init" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.822281 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b059914-62e4-4f34-bcee-4c10cd444e4c" containerName="dnsmasq-dns" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.822331 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8534b0f6-a782-4296-8a6f-f0eefb01d33a" containerName="dnsmasq-dns" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.823242 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.827423 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.827565 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.827426 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.829104 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.859378 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg"] Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.948976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.949038 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm4l5\" (UniqueName: \"kubernetes.io/projected/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-kube-api-access-sm4l5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:04 crc kubenswrapper[4770]: I1209 14:52:04.949185 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:04 crc 
kubenswrapper[4770]: I1209 14:52:04.949281 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.051434 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.051559 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.051706 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.051753 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm4l5\" (UniqueName: \"kubernetes.io/projected/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-kube-api-access-sm4l5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.064780 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.065385 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.066164 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.092518 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm4l5\" (UniqueName: \"kubernetes.io/projected/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-kube-api-access-sm4l5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.212109 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.699566 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7caea4cd-eb43-420d-8c5e-835745de19e8","Type":"ContainerStarted","Data":"e41d6a9202238334673f3340fa34ebf05bb47992c9af091e85f465038a7c64c3"} Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.701272 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.703450 4770 generic.go:334] "Generic (PLEG): container finished" podID="f777c49a-0725-4856-895f-06add0375093" containerID="adf8b15b2691ae3048abd6309cb8db39482dc7e55a82c7ba9c4a3de1bf820fd5" exitCode=0 Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.703487 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f777c49a-0725-4856-895f-06add0375093","Type":"ContainerDied","Data":"adf8b15b2691ae3048abd6309cb8db39482dc7e55a82c7ba9c4a3de1bf820fd5"} Dec 09 14:52:05 crc kubenswrapper[4770]: E1209 14:52:05.709877 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:52:05 crc kubenswrapper[4770]: E1209 14:52:05.709934 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:52:05 crc kubenswrapper[4770]: E1209 14:52:05.710089 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:52:05 crc kubenswrapper[4770]: E1209 14:52:05.712950 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.740572 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.740552309 podStartE2EDuration="36.740552309s" podCreationTimestamp="2025-12-09 14:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:52:05.736641112 +0000 UTC m=+1757.632843268" watchObservedRunningTime="2025-12-09 14:52:05.740552309 +0000 UTC m=+1757.636754445" Dec 09 14:52:05 crc kubenswrapper[4770]: I1209 14:52:05.817746 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg"] Dec 09 14:52:05 crc kubenswrapper[4770]: W1209 14:52:05.834349 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f455fc3_9b1f_48b0_9527_c9fa301c6b6d.slice/crio-38ff9f72c2006d2811885c1a00a84a107f7a9ecf0a76c6ee242e3ce5c370fe5a WatchSource:0}: Error finding container 38ff9f72c2006d2811885c1a00a84a107f7a9ecf0a76c6ee242e3ce5c370fe5a: Status 404 returned error can't find the container with id 38ff9f72c2006d2811885c1a00a84a107f7a9ecf0a76c6ee242e3ce5c370fe5a Dec 09 14:52:06 crc kubenswrapper[4770]: I1209 14:52:06.719025 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" event={"ID":"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d","Type":"ContainerStarted","Data":"38ff9f72c2006d2811885c1a00a84a107f7a9ecf0a76c6ee242e3ce5c370fe5a"} Dec 09 14:52:06 crc kubenswrapper[4770]: I1209 14:52:06.722282 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f777c49a-0725-4856-895f-06add0375093","Type":"ContainerStarted","Data":"e2af99ce926ff5270d37641e178fc7a4c2fa1408f4be92a774702a5fae6f4a68"} Dec 09 14:52:06 crc kubenswrapper[4770]: I1209 14:52:06.791799 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.791774294 podStartE2EDuration="36.791774294s" podCreationTimestamp="2025-12-09 14:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:52:06.768054706 +0000 UTC m=+1758.664256862" watchObservedRunningTime="2025-12-09 14:52:06.791774294 +0000 UTC m=+1758.687976430" Dec 09 14:52:08 crc kubenswrapper[4770]: E1209 14:52:08.598193 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:52:10 crc kubenswrapper[4770]: I1209 14:52:10.884270 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:52:13 crc kubenswrapper[4770]: I1209 14:52:13.588404 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:52:13 crc kubenswrapper[4770]: E1209 14:52:13.589334 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:52:16 crc kubenswrapper[4770]: E1209 14:52:16.595749 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:52:17 crc kubenswrapper[4770]: I1209 14:52:17.423833 4770 scope.go:117] "RemoveContainer" containerID="abb6e4645de1352b819fd286d6b304c6be978d6421f4a58b87cb3e5a2e2dee2a" Dec 09 14:52:19 crc kubenswrapper[4770]: I1209 14:52:19.931169 4770 scope.go:117] "RemoveContainer" containerID="8ae6be449115a496f5b04e750b3f4f29652f08349b285fe4043bc2c846adf369" Dec 09 14:52:20 crc kubenswrapper[4770]: I1209 14:52:20.079717 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7caea4cd-eb43-420d-8c5e-835745de19e8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.232:5671: connect: connection refused" Dec 09 14:52:20 crc kubenswrapper[4770]: I1209 14:52:20.742217 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" event={"ID":"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d","Type":"ContainerStarted","Data":"07aec957651caf4bdaf0c4b866377932641a390daa6817c621c2ec72ea1cc88b"} Dec 09 14:52:20 crc kubenswrapper[4770]: I1209 14:52:20.765631 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" podStartSLOduration=2.604605507 podStartE2EDuration="16.765603374s" podCreationTimestamp="2025-12-09 14:52:04 +0000 UTC" firstStartedPulling="2025-12-09 14:52:05.838231754 +0000 UTC m=+1757.734433890" lastFinishedPulling="2025-12-09 14:52:19.999229621 +0000 UTC m=+1771.895431757" observedRunningTime="2025-12-09 14:52:20.761946754 +0000 UTC m=+1772.658148890" watchObservedRunningTime="2025-12-09 14:52:20.765603374 +0000 UTC m=+1772.661805550" Dec 09 14:52:20 crc kubenswrapper[4770]: I1209 14:52:20.886950 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 14:52:22 crc kubenswrapper[4770]: E1209 14:52:22.591329 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:52:27 crc kubenswrapper[4770]: E1209 14:52:27.590979 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:52:28 crc kubenswrapper[4770]: I1209 14:52:28.595093 4770 scope.go:117] "RemoveContainer" 
containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:52:28 crc kubenswrapper[4770]: E1209 14:52:28.595776 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:52:30 crc kubenswrapper[4770]: I1209 14:52:30.079909 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 09 14:52:31 crc kubenswrapper[4770]: I1209 14:52:31.855449 4770 generic.go:334] "Generic (PLEG): container finished" podID="0f455fc3-9b1f-48b0-9527-c9fa301c6b6d" containerID="07aec957651caf4bdaf0c4b866377932641a390daa6817c621c2ec72ea1cc88b" exitCode=0 Dec 09 14:52:31 crc kubenswrapper[4770]: I1209 14:52:31.855576 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" event={"ID":"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d","Type":"ContainerDied","Data":"07aec957651caf4bdaf0c4b866377932641a390daa6817c621c2ec72ea1cc88b"} Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.426715 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.505753 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-ssh-key\") pod \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.505912 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-inventory\") pod \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.506020 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm4l5\" (UniqueName: \"kubernetes.io/projected/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-kube-api-access-sm4l5\") pod \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.506061 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-repo-setup-combined-ca-bundle\") pod \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\" (UID: \"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d\") " Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.522610 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-kube-api-access-sm4l5" (OuterVolumeSpecName: "kube-api-access-sm4l5") pod "0f455fc3-9b1f-48b0-9527-c9fa301c6b6d" (UID: "0f455fc3-9b1f-48b0-9527-c9fa301c6b6d"). InnerVolumeSpecName "kube-api-access-sm4l5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.524805 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0f455fc3-9b1f-48b0-9527-c9fa301c6b6d" (UID: "0f455fc3-9b1f-48b0-9527-c9fa301c6b6d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.541229 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-inventory" (OuterVolumeSpecName: "inventory") pod "0f455fc3-9b1f-48b0-9527-c9fa301c6b6d" (UID: "0f455fc3-9b1f-48b0-9527-c9fa301c6b6d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.541656 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0f455fc3-9b1f-48b0-9527-c9fa301c6b6d" (UID: "0f455fc3-9b1f-48b0-9527-c9fa301c6b6d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.608328 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.608358 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.608368 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm4l5\" (UniqueName: \"kubernetes.io/projected/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-kube-api-access-sm4l5\") on node \"crc\" DevicePath \"\"" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.608377 4770 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f455fc3-9b1f-48b0-9527-c9fa301c6b6d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.876151 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" event={"ID":"0f455fc3-9b1f-48b0-9527-c9fa301c6b6d","Type":"ContainerDied","Data":"38ff9f72c2006d2811885c1a00a84a107f7a9ecf0a76c6ee242e3ce5c370fe5a"} Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.876508 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ff9f72c2006d2811885c1a00a84a107f7a9ecf0a76c6ee242e3ce5c370fe5a" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.876245 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.957986 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq"] Dec 09 14:52:33 crc kubenswrapper[4770]: E1209 14:52:33.958548 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f455fc3-9b1f-48b0-9527-c9fa301c6b6d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.958573 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f455fc3-9b1f-48b0-9527-c9fa301c6b6d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.958890 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f455fc3-9b1f-48b0-9527-c9fa301c6b6d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.959984 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.969514 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq"] Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.998797 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.999018 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.999277 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 14:52:33 crc kubenswrapper[4770]: I1209 14:52:33.999412 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.015903 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-srdrq\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.016040 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdk2p\" (UniqueName: \"kubernetes.io/projected/ed33ffb9-0111-411d-bcf6-8f072d236a17-kube-api-access-xdk2p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-srdrq\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.016174 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-srdrq\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.118140 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-srdrq\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.118259 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdk2p\" (UniqueName: \"kubernetes.io/projected/ed33ffb9-0111-411d-bcf6-8f072d236a17-kube-api-access-xdk2p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-srdrq\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.118370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-srdrq\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.125477 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-srdrq\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.132307 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-srdrq\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.144074 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdk2p\" (UniqueName: \"kubernetes.io/projected/ed33ffb9-0111-411d-bcf6-8f072d236a17-kube-api-access-xdk2p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-srdrq\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.320331 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:34 crc kubenswrapper[4770]: E1209 14:52:34.590877 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:52:34 crc kubenswrapper[4770]: W1209 14:52:34.870534 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded33ffb9_0111_411d_bcf6_8f072d236a17.slice/crio-aa16e5825242fe0d8152c81599402b8bfd94d8a84f9d7c1a47ab938c9427a8d4 WatchSource:0}: Error finding container aa16e5825242fe0d8152c81599402b8bfd94d8a84f9d7c1a47ab938c9427a8d4: Status 404 returned error can't find the container with id aa16e5825242fe0d8152c81599402b8bfd94d8a84f9d7c1a47ab938c9427a8d4 Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.879429 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq"] Dec 09 14:52:34 crc kubenswrapper[4770]: I1209 14:52:34.889347 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" event={"ID":"ed33ffb9-0111-411d-bcf6-8f072d236a17","Type":"ContainerStarted","Data":"aa16e5825242fe0d8152c81599402b8bfd94d8a84f9d7c1a47ab938c9427a8d4"} Dec 09 14:52:36 crc kubenswrapper[4770]: I1209 14:52:36.915227 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" event={"ID":"ed33ffb9-0111-411d-bcf6-8f072d236a17","Type":"ContainerStarted","Data":"2396acd5dcf0eac41d97dc1eba35468297d12a3b682acf3a988fc82b237f7507"} Dec 09 14:52:36 crc kubenswrapper[4770]: I1209 14:52:36.936812 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" podStartSLOduration=3.075412961 podStartE2EDuration="3.936792465s" podCreationTimestamp="2025-12-09 14:52:33 +0000 UTC" firstStartedPulling="2025-12-09 14:52:34.873001259 +0000 UTC m=+1786.769203395" lastFinishedPulling="2025-12-09 14:52:35.734380763 +0000 UTC m=+1787.630582899" observedRunningTime="2025-12-09 14:52:36.933606707 +0000 UTC m=+1788.829808853" watchObservedRunningTime="2025-12-09 14:52:36.936792465 +0000 UTC m=+1788.832994601" Dec 09 14:52:38 crc kubenswrapper[4770]: I1209 14:52:38.938469 4770 generic.go:334] "Generic (PLEG): container finished" podID="ed33ffb9-0111-411d-bcf6-8f072d236a17" containerID="2396acd5dcf0eac41d97dc1eba35468297d12a3b682acf3a988fc82b237f7507" exitCode=0 Dec 09 14:52:38 crc kubenswrapper[4770]: I1209 14:52:38.938691 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" event={"ID":"ed33ffb9-0111-411d-bcf6-8f072d236a17","Type":"ContainerDied","Data":"2396acd5dcf0eac41d97dc1eba35468297d12a3b682acf3a988fc82b237f7507"} Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.535684 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.588964 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:52:40 crc kubenswrapper[4770]: E1209 14:52:40.589309 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.653910 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-ssh-key\") pod \"ed33ffb9-0111-411d-bcf6-8f072d236a17\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.653960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdk2p\" (UniqueName: \"kubernetes.io/projected/ed33ffb9-0111-411d-bcf6-8f072d236a17-kube-api-access-xdk2p\") pod \"ed33ffb9-0111-411d-bcf6-8f072d236a17\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.654089 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-inventory\") pod \"ed33ffb9-0111-411d-bcf6-8f072d236a17\" (UID: \"ed33ffb9-0111-411d-bcf6-8f072d236a17\") " Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.659560 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed33ffb9-0111-411d-bcf6-8f072d236a17-kube-api-access-xdk2p" (OuterVolumeSpecName: "kube-api-access-xdk2p") pod "ed33ffb9-0111-411d-bcf6-8f072d236a17" (UID: "ed33ffb9-0111-411d-bcf6-8f072d236a17"). InnerVolumeSpecName "kube-api-access-xdk2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.690381 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-inventory" (OuterVolumeSpecName: "inventory") pod "ed33ffb9-0111-411d-bcf6-8f072d236a17" (UID: "ed33ffb9-0111-411d-bcf6-8f072d236a17"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.720683 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ed33ffb9-0111-411d-bcf6-8f072d236a17" (UID: "ed33ffb9-0111-411d-bcf6-8f072d236a17"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.756465 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.756503 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdk2p\" (UniqueName: \"kubernetes.io/projected/ed33ffb9-0111-411d-bcf6-8f072d236a17-kube-api-access-xdk2p\") on node \"crc\" DevicePath \"\"" Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.756535 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed33ffb9-0111-411d-bcf6-8f072d236a17-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.964747 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" event={"ID":"ed33ffb9-0111-411d-bcf6-8f072d236a17","Type":"ContainerDied","Data":"aa16e5825242fe0d8152c81599402b8bfd94d8a84f9d7c1a47ab938c9427a8d4"} Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.965057 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa16e5825242fe0d8152c81599402b8bfd94d8a84f9d7c1a47ab938c9427a8d4" Dec 09 14:52:40 crc kubenswrapper[4770]: I1209 14:52:40.964921 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-srdrq" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.639303 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz"] Dec 09 14:52:41 crc kubenswrapper[4770]: E1209 14:52:41.640691 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed33ffb9-0111-411d-bcf6-8f072d236a17" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.640809 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed33ffb9-0111-411d-bcf6-8f072d236a17" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.641147 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed33ffb9-0111-411d-bcf6-8f072d236a17" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.642467 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.644662 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.645108 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.645249 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.645472 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.653530 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz"] Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.787095 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.787239 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.787304 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.787331 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwb2\" (UniqueName: \"kubernetes.io/projected/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-kube-api-access-5vwb2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.888770 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.888826 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vwb2\" (UniqueName: \"kubernetes.io/projected/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-kube-api-access-5vwb2\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.888919 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.888999 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.899713 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.903557 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.906298 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.923148 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vwb2\" (UniqueName: \"kubernetes.io/projected/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-kube-api-access-5vwb2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:41 crc kubenswrapper[4770]: I1209 14:52:41.974225 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:52:42 crc kubenswrapper[4770]: E1209 14:52:42.593363 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:52:42 crc kubenswrapper[4770]: I1209 14:52:42.608347 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz"] Dec 09 14:52:42 crc kubenswrapper[4770]: I1209 14:52:42.986820 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" event={"ID":"0efe3c6a-6cd4-4b70-9929-b207e4aecee3","Type":"ContainerStarted","Data":"0888c1dce3fc2894a2913d51195a9c545b5ca619b3bfdf18ebeb6454701d2563"} Dec 09 14:52:43 crc kubenswrapper[4770]: I1209 14:52:43.996360 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" event={"ID":"0efe3c6a-6cd4-4b70-9929-b207e4aecee3","Type":"ContainerStarted","Data":"952475c330f27b91d3a80ebe454cbe7c3cd47c95c45e76b978de197f327fee73"} Dec 09 14:52:44 crc kubenswrapper[4770]: I1209 14:52:44.018241 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" podStartSLOduration=2.610166184 podStartE2EDuration="3.018221178s" podCreationTimestamp="2025-12-09 14:52:41 +0000 UTC" firstStartedPulling="2025-12-09 14:52:42.58975269 +0000 UTC m=+1794.485954846" lastFinishedPulling="2025-12-09 14:52:42.997807704 +0000 UTC m=+1794.894009840" observedRunningTime="2025-12-09 14:52:44.010525849 +0000 UTC m=+1795.906728005" watchObservedRunningTime="2025-12-09 14:52:44.018221178 +0000 UTC m=+1795.914423304" Dec 09 14:52:48 crc kubenswrapper[4770]: E1209 14:52:48.721929 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:52:48 crc kubenswrapper[4770]: E1209 14:52:48.722514 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:52:48 crc kubenswrapper[4770]: E1209 14:52:48.722658 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:52:48 crc kubenswrapper[4770]: E1209 14:52:48.723842 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:52:54 crc kubenswrapper[4770]: I1209 14:52:54.588467 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:52:54 crc kubenswrapper[4770]: E1209 14:52:54.589413 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:52:56 crc kubenswrapper[4770]: E1209 14:52:56.693134 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:52:56 crc kubenswrapper[4770]: E1209 14:52:56.693522 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:52:56 crc kubenswrapper[4770]: E1209 14:52:56.693720 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:52:56 crc kubenswrapper[4770]: E1209 14:52:56.695085 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:53:01 crc kubenswrapper[4770]: E1209 14:53:01.590257 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:53:09 crc kubenswrapper[4770]: I1209 14:53:09.588389 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:53:09 crc kubenswrapper[4770]: E1209 14:53:09.589654 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:53:12 crc kubenswrapper[4770]: E1209 14:53:12.594002 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:53:12 crc kubenswrapper[4770]: E1209 14:53:12.594229 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:53:20 crc kubenswrapper[4770]: I1209 14:53:20.165609 4770 scope.go:117] "RemoveContainer" containerID="54bac9614ea0d105bc38a11270fb1baa50c4b88f058010891af0698bff023f15" Dec 09 14:53:20 crc kubenswrapper[4770]: I1209 14:53:20.199542 4770 scope.go:117] "RemoveContainer" containerID="60e2e25dc5a318fe2855a16e28d2824b691fe80ee67a134d1f776a37731058e3" Dec 09 14:53:23 crc kubenswrapper[4770]: I1209 14:53:23.589525 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:53:23 crc kubenswrapper[4770]: E1209 14:53:23.590233 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:53:23 crc kubenswrapper[4770]: E1209 14:53:23.592395 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:53:25 crc kubenswrapper[4770]: E1209 14:53:25.590690 4770 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:53:34 crc kubenswrapper[4770]: E1209 14:53:34.590673 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:53:35 crc kubenswrapper[4770]: I1209 14:53:35.588626 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:53:35 crc kubenswrapper[4770]: E1209 14:53:35.589205 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:53:38 crc kubenswrapper[4770]: E1209 14:53:38.597382 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:53:47 crc kubenswrapper[4770]: I1209 14:53:47.589010 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:53:47 crc kubenswrapper[4770]: E1209 14:53:47.590120 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:53:47 crc kubenswrapper[4770]: E1209 14:53:47.590927 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:53:52 crc kubenswrapper[4770]: E1209 14:53:52.592660 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:53:59 crc kubenswrapper[4770]: I1209 14:53:59.589241 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:53:59 crc kubenswrapper[4770]: E1209 14:53:59.590304 4770 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:53:59 crc kubenswrapper[4770]: E1209 14:53:59.590823 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:54:04 crc kubenswrapper[4770]: E1209 14:54:04.590821 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:54:10 crc kubenswrapper[4770]: I1209 14:54:10.589102 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:54:10 crc kubenswrapper[4770]: E1209 14:54:10.591714 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:54:14 crc kubenswrapper[4770]: E1209 14:54:14.718052 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:54:14 crc kubenswrapper[4770]: E1209 14:54:14.718822 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:54:14 crc kubenswrapper[4770]: E1209 14:54:14.718990 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:54:14 crc kubenswrapper[4770]: E1209 14:54:14.720236 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:54:19 crc kubenswrapper[4770]: E1209 14:54:19.732846 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:54:19 crc kubenswrapper[4770]: E1209 14:54:19.733355 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:54:19 crc kubenswrapper[4770]: E1209 14:54:19.733489 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:54:19 crc kubenswrapper[4770]: E1209 14:54:19.734810 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:54:20 crc kubenswrapper[4770]: I1209 14:54:20.296902 4770 scope.go:117] "RemoveContainer" containerID="d64c4f4bca7327a10af13f66da55bb6ae201727cf953df708f837e0c525fe72f" Dec 09 14:54:23 crc kubenswrapper[4770]: I1209 14:54:23.589005 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b" Dec 09 14:54:23 crc kubenswrapper[4770]: E1209 14:54:23.589873 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 14:54:29 crc kubenswrapper[4770]: E1209 14:54:29.590808 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:54:31 crc kubenswrapper[4770]: E1209 14:54:31.649745 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:54:36 crc 
Dec 09 14:54:36 crc kubenswrapper[4770]: I1209 14:54:36.589029 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b"
Dec 09 14:54:36 crc kubenswrapper[4770]: E1209 14:54:36.590110 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 14:54:40 crc kubenswrapper[4770]: E1209 14:54:40.590849 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 14:54:47 crc kubenswrapper[4770]: E1209 14:54:47.591032 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 14:54:49 crc kubenswrapper[4770]: I1209 14:54:49.588080 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b"
Dec 09 14:54:50 crc kubenswrapper[4770]: I1209 14:54:50.605772 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"f547c2f2a6f46a3f60619acf50fcf46bf0865aeaee5c1b464656d2881b7b9a43"}
Dec 09 14:54:52 crc kubenswrapper[4770]: E1209 14:54:52.590405 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 14:55:01 crc kubenswrapper[4770]: E1209 14:55:01.594714 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 14:55:03 crc kubenswrapper[4770]: E1209 14:55:03.591129 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 14:55:08 crc kubenswrapper[4770]: I1209 14:55:08.050616 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2215-account-create-update-wnhj4"]
Dec 09 14:55:08 crc kubenswrapper[4770]: I1209 14:55:08.088454 4770 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openstack/keystone-db-create-k7qqf"] Dec 09 14:55:08 crc kubenswrapper[4770]: I1209 14:55:08.108429 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2215-account-create-update-wnhj4"] Dec 09 14:55:08 crc kubenswrapper[4770]: I1209 14:55:08.122686 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-k7qqf"] Dec 09 14:55:08 crc kubenswrapper[4770]: I1209 14:55:08.661363 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54c3930-0be3-4634-b250-921a22df3263" path="/var/lib/kubelet/pods/d54c3930-0be3-4634-b250-921a22df3263/volumes" Dec 09 14:55:08 crc kubenswrapper[4770]: I1209 14:55:08.666640 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf370f4-55f9-49a5-86a8-751c2b6ff94d" path="/var/lib/kubelet/pods/faf370f4-55f9-49a5-86a8-751c2b6ff94d/volumes" Dec 09 14:55:09 crc kubenswrapper[4770]: I1209 14:55:09.035480 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-695f-account-create-update-kfmvh"] Dec 09 14:55:09 crc kubenswrapper[4770]: I1209 14:55:09.047330 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-695f-account-create-update-kfmvh"] Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.038415 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tdpdx"] Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.054484 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-54p89"] Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.064483 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2aa9-account-create-update-9b55f"] Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.074121 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2aa9-account-create-update-9b55f"] Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.083232 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tdpdx"] Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.092814 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-54p89"] Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.601890 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11c8e0c9-b0a4-4dde-a79a-db5fa371e37c" path="/var/lib/kubelet/pods/11c8e0c9-b0a4-4dde-a79a-db5fa371e37c/volumes" Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.603592 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29bceb3d-b7f4-43f6-bb98-834984940e5b" path="/var/lib/kubelet/pods/29bceb3d-b7f4-43f6-bb98-834984940e5b/volumes" Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.605027 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="491157c5-64af-4a59-8e9f-59695e2d7b6c" path="/var/lib/kubelet/pods/491157c5-64af-4a59-8e9f-59695e2d7b6c/volumes" Dec 09 14:55:10 crc kubenswrapper[4770]: I1209 14:55:10.605901 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c7c967-d86d-4651-adf0-c7e2bc3eb428" path="/var/lib/kubelet/pods/82c7c967-d86d-4651-adf0-c7e2bc3eb428/volumes" Dec 09 14:55:14 crc kubenswrapper[4770]: E1209 14:55:14.592138 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 14:55:15 crc kubenswrapper[4770]: E1209 14:55:15.589361 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 14:55:20 crc kubenswrapper[4770]: I1209 14:55:20.363143 4770 scope.go:117] "RemoveContainer" containerID="8ce09b6881e3c0c16c007d1a1c573106fc46a25f30c41ec8f657cc3d0f29011f"
Dec 09 14:55:20 crc kubenswrapper[4770]: I1209 14:55:20.403801 4770 scope.go:117] "RemoveContainer" containerID="42e27446d2328b3528785983be28da151de673b32df703921db257ccb5e30920"
Dec 09 14:55:20 crc kubenswrapper[4770]: I1209 14:55:20.442719 4770 scope.go:117] "RemoveContainer" containerID="7e60a59376694e1847b9543a778045b77d4cc3a3743f8bfa2d638514e5ceb6d8"
Dec 09 14:55:20 crc kubenswrapper[4770]: I1209 14:55:20.499266 4770 scope.go:117] "RemoveContainer" containerID="e4ba7f180a93ca8c3d8d545b39346716c5e28825b22a4f8aafdd43d4268cf142"
Dec 09 14:55:20 crc kubenswrapper[4770]: I1209 14:55:20.551306 4770 scope.go:117] "RemoveContainer" containerID="e40961b4fca2b0e643e0591d89c5f0cd204691d8585c0aa54352af0043056fdb"
Dec 09 14:55:20 crc kubenswrapper[4770]: I1209 14:55:20.597370 4770 scope.go:117] "RemoveContainer" containerID="f6467f06d4bbdd125436c8d1274f12324c97dc5763335c7143ce89133fb7c9a4"
Dec 09 14:55:27 crc kubenswrapper[4770]: E1209 14:55:27.591179 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 14:55:27 crc kubenswrapper[4770]: E1209 14:55:27.591179 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 14:55:38 crc kubenswrapper[4770]: E1209 14:55:38.598743 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 14:55:39 crc kubenswrapper[4770]: E1209 14:55:39.590118 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 14:55:45 crc kubenswrapper[4770]: I1209 14:55:45.044205 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2bfb-account-create-update-jsvdf"]
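
Note the cadence in the Back-off entries: the pod worker re-logs ImagePullBackOff on roughly every sync (10 to 12 seconds apart here), but actual pull attempts are much rarer; the next recorded pull for openstack-ceilometer-central does not appear until 14:57:13, later in this log, almost three minutes after the 14:54:19 failure. That spacing is the kubelet's image-pull backoff. As an assumption (the constants are release-dependent and not stated in this log), the schedule starts around 10s and doubles to a 5m cap, the same cap the CrashLoopBackOff entries above report as "back-off 5m0s". A sketch of that schedule:

    # Assumed schedule: 10s initial delay, doubling, capped at 300s (5m).
    # Illustrative only; the real values live in the kubelet's backoff config.
    base_s, cap_s = 10, 300
    delay = base_s
    for attempt in range(1, 8):
        print(f"pull attempt {attempt}: next retry in {delay}s")
        delay = min(delay * 2, cap_s)
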
Dec 09 14:55:45 crc kubenswrapper[4770]: I1209 14:55:45.057856 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-769fn"]
Dec 09 14:55:45 crc kubenswrapper[4770]: I1209 14:55:45.072651 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2bfb-account-create-update-jsvdf"]
Dec 09 14:55:45 crc kubenswrapper[4770]: I1209 14:55:45.086993 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-769fn"]
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.038673 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a98f-account-create-update-8gx6q"]
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.058575 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ckkhz"]
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.070245 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ckkhz"]
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.080099 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-xcbxz"]
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.089866 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a98f-account-create-update-8gx6q"]
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.099988 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8111-account-create-update-9hcm5"]
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.110152 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-xcbxz"]
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.119370 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8111-account-create-update-9hcm5"]
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.603173 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23bb07cc-d85a-4cbd-a8ed-429d01349e74" path="/var/lib/kubelet/pods/23bb07cc-d85a-4cbd-a8ed-429d01349e74/volumes"
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.604531 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="508799c0-ff24-43d1-b131-cf60b96facfd" path="/var/lib/kubelet/pods/508799c0-ff24-43d1-b131-cf60b96facfd/volumes"
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.605860 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95983696-25bb-4b13-8a96-59b7af59dda4" path="/var/lib/kubelet/pods/95983696-25bb-4b13-8a96-59b7af59dda4/volumes"
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.610332 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33fdb1f-6ef8-479e-aa72-0f14af285ad7" path="/var/lib/kubelet/pods/c33fdb1f-6ef8-479e-aa72-0f14af285ad7/volumes"
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.611106 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1a70f2-cfcc-40ad-bbd4-6f307928b20f" path="/var/lib/kubelet/pods/cf1a70f2-cfcc-40ad-bbd4-6f307928b20f/volumes"
Dec 09 14:55:46 crc kubenswrapper[4770]: I1209 14:55:46.611979 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec87ce87-2276-4c01-bf53-60236cda1e26" path="/var/lib/kubelet/pods/ec87ce87-2276-4c01-bf53-60236cda1e26/volumes"
Dec 09 14:55:47 crc kubenswrapper[4770]: I1209 14:55:47.032438 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9qxdb"]
Dec 09 14:55:47 crc kubenswrapper[4770]: I1209 14:55:47.044490 4770 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openstack/cloudkitty-48e7-account-create-update-lwm9s"] Dec 09 14:55:47 crc kubenswrapper[4770]: I1209 14:55:47.054674 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9qxdb"] Dec 09 14:55:47 crc kubenswrapper[4770]: I1209 14:55:47.065168 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-48e7-account-create-update-lwm9s"] Dec 09 14:55:48 crc kubenswrapper[4770]: I1209 14:55:48.602902 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39626459-4da7-4aae-b9b4-187a18cd1c3e" path="/var/lib/kubelet/pods/39626459-4da7-4aae-b9b4-187a18cd1c3e/volumes" Dec 09 14:55:48 crc kubenswrapper[4770]: I1209 14:55:48.604324 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad" path="/var/lib/kubelet/pods/7161fd66-acfd-4e05-8b62-5b1f0bc1c7ad/volumes" Dec 09 14:55:51 crc kubenswrapper[4770]: E1209 14:55:51.591716 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:55:53 crc kubenswrapper[4770]: E1209 14:55:53.590927 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:55:58 crc kubenswrapper[4770]: I1209 14:55:58.496339 4770 generic.go:334] "Generic (PLEG): container finished" podID="0efe3c6a-6cd4-4b70-9929-b207e4aecee3" containerID="952475c330f27b91d3a80ebe454cbe7c3cd47c95c45e76b978de197f327fee73" exitCode=0 Dec 09 14:55:58 crc kubenswrapper[4770]: I1209 14:55:58.496412 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" event={"ID":"0efe3c6a-6cd4-4b70-9929-b207e4aecee3","Type":"ContainerDied","Data":"952475c330f27b91d3a80ebe454cbe7c3cd47c95c45e76b978de197f327fee73"} Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.043627 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.097579 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-inventory\") pod \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.097648 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-bootstrap-combined-ca-bundle\") pod \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.097792 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-ssh-key\") pod \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.097841 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vwb2\" (UniqueName: \"kubernetes.io/projected/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-kube-api-access-5vwb2\") pod \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\" (UID: \"0efe3c6a-6cd4-4b70-9929-b207e4aecee3\") " Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.107453 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-kube-api-access-5vwb2" (OuterVolumeSpecName: "kube-api-access-5vwb2") pod "0efe3c6a-6cd4-4b70-9929-b207e4aecee3" (UID: "0efe3c6a-6cd4-4b70-9929-b207e4aecee3"). InnerVolumeSpecName "kube-api-access-5vwb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.108871 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0efe3c6a-6cd4-4b70-9929-b207e4aecee3" (UID: "0efe3c6a-6cd4-4b70-9929-b207e4aecee3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.141218 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-inventory" (OuterVolumeSpecName: "inventory") pod "0efe3c6a-6cd4-4b70-9929-b207e4aecee3" (UID: "0efe3c6a-6cd4-4b70-9929-b207e4aecee3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.150910 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0efe3c6a-6cd4-4b70-9929-b207e4aecee3" (UID: "0efe3c6a-6cd4-4b70-9929-b207e4aecee3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.200130 4770 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.200380 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.200440 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vwb2\" (UniqueName: \"kubernetes.io/projected/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-kube-api-access-5vwb2\") on node \"crc\" DevicePath \"\"" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.200545 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0efe3c6a-6cd4-4b70-9929-b207e4aecee3-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.522382 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" event={"ID":"0efe3c6a-6cd4-4b70-9929-b207e4aecee3","Type":"ContainerDied","Data":"0888c1dce3fc2894a2913d51195a9c545b5ca619b3bfdf18ebeb6454701d2563"} Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.522426 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0888c1dce3fc2894a2913d51195a9c545b5ca619b3bfdf18ebeb6454701d2563" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.522484 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.652564 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh"] Dec 09 14:56:00 crc kubenswrapper[4770]: E1209 14:56:00.653521 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0efe3c6a-6cd4-4b70-9929-b207e4aecee3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.653567 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0efe3c6a-6cd4-4b70-9929-b207e4aecee3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.654217 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efe3c6a-6cd4-4b70-9929-b207e4aecee3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.655521 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.658123 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.659554 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.660008 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.660165 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.675470 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh"] Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.710740 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5hl2\" (UniqueName: \"kubernetes.io/projected/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-kube-api-access-d5hl2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.710823 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.710873 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.813243 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5hl2\" (UniqueName: \"kubernetes.io/projected/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-kube-api-access-d5hl2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.813315 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.813368 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh"
Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.821251 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh"
Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.821846 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh"
Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.836423 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5hl2\" (UniqueName: \"kubernetes.io/projected/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-kube-api-access-d5hl2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh"
Dec 09 14:56:00 crc kubenswrapper[4770]: I1209 14:56:00.978836 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh"
Dec 09 14:56:01 crc kubenswrapper[4770]: I1209 14:56:01.531894 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh"]
Dec 09 14:56:02 crc kubenswrapper[4770]: I1209 14:56:02.540877 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" event={"ID":"145ea2d4-9119-435e-aac0-ac0ee9eb29bf","Type":"ContainerStarted","Data":"8a6e95b468316cc8173d8805eac4aaa91fc6ff18ff661692eeefd2e026c57bcb"}
Dec 09 14:56:02 crc kubenswrapper[4770]: I1209 14:56:02.541200 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" event={"ID":"145ea2d4-9119-435e-aac0-ac0ee9eb29bf","Type":"ContainerStarted","Data":"d4b134a770195fcd4f044f84fdf6a1620359d87ab64355403d3ca1e7679879d7"}
Dec 09 14:56:03 crc kubenswrapper[4770]: I1209 14:56:03.585707 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" podStartSLOduration=3.138124499 podStartE2EDuration="3.585669219s" podCreationTimestamp="2025-12-09 14:56:00 +0000 UTC" firstStartedPulling="2025-12-09 14:56:01.539010545 +0000 UTC m=+1993.435212681" lastFinishedPulling="2025-12-09 14:56:01.986555265 +0000 UTC m=+1993.882757401" observedRunningTime="2025-12-09 14:56:03.582624227 +0000 UTC m=+1995.478826403" watchObservedRunningTime="2025-12-09 14:56:03.585669219 +0000 UTC m=+1995.481871355"
Dec 09 14:56:04 crc kubenswrapper[4770]: E1209 14:56:04.590665 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
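
The pod_startup_latency_tracker entry above is internally consistent and worth decoding: podStartSLOduration excludes image-pull time, so the gap between podStartE2EDuration and podStartSLOduration should equal the pull window (lastFinishedPulling minus firstStartedPulling). Checking that with the monotonic m=+ offsets copied from the entry:

    # Values copied verbatim from the "Observed pod startup duration" entry above.
    e2e = 3.585669219                       # podStartE2EDuration, seconds
    slo = 3.138124499                       # podStartSLOduration (excludes pulling)
    pull = 1993.882757401 - 1993.435212681  # lastFinishedPulling - firstStartedPulling
    print(round(e2e - slo, 9))              # 0.44754472
    print(round(pull, 9))                   # 0.44754472; the two agree
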
Dec 09 14:56:06 crc kubenswrapper[4770]: E1209 14:56:06.591233 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 14:56:19 crc kubenswrapper[4770]: I1209 14:56:19.045174 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xn76l"]
Dec 09 14:56:19 crc kubenswrapper[4770]: I1209 14:56:19.055156 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xn76l"]
Dec 09 14:56:19 crc kubenswrapper[4770]: E1209 14:56:19.591087 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 14:56:20 crc kubenswrapper[4770]: E1209 14:56:20.590969 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 14:56:20 crc kubenswrapper[4770]: I1209 14:56:20.601678 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0ed519f-b86b-4baf-a731-ffe46bc15641" path="/var/lib/kubelet/pods/a0ed519f-b86b-4baf-a731-ffe46bc15641/volumes"
Dec 09 14:56:20 crc kubenswrapper[4770]: I1209 14:56:20.779824 4770 scope.go:117] "RemoveContainer" containerID="bbde407fdce04eed1e604f67c5e300a6ffa78359d7698b7a0cae2c1b5cf47e04"
Dec 09 14:56:20 crc kubenswrapper[4770]: I1209 14:56:20.819396 4770 scope.go:117] "RemoveContainer" containerID="8e3925ed987f379c889a2f874a4f12381d8e0c385c89e8d5317e39e3cbc9c33d"
Dec 09 14:56:20 crc kubenswrapper[4770]: I1209 14:56:20.892902 4770 scope.go:117] "RemoveContainer" containerID="d962e989cadf609eada02d4d1fee0ad48c84499a9e4a0afddae583a9363f440f"
Dec 09 14:56:20 crc kubenswrapper[4770]: I1209 14:56:20.932922 4770 scope.go:117] "RemoveContainer" containerID="e40194ceef5a19e9dfc63b26ff21214ef8ea505ebe09c22e62a297aaea4870fb"
Dec 09 14:56:20 crc kubenswrapper[4770]: I1209 14:56:20.999085 4770 scope.go:117] "RemoveContainer" containerID="7f4d514f5a6cf63cd7a85ded2ca0593e944d1664a01f9e5a0655b901f7e378ad"
Dec 09 14:56:21 crc kubenswrapper[4770]: I1209 14:56:21.091529 4770 scope.go:117] "RemoveContainer" containerID="a26abe63e8b325888effe35c6b7744e2299e3cf1a91158c47f3ba62c52e48fc8"
Dec 09 14:56:21 crc kubenswrapper[4770]: I1209 14:56:21.113968 4770 scope.go:117] "RemoveContainer" containerID="5b85f319e49cb8e18272397cdb391b79af9b7da060e009d5224d66ddeab41df0"
Dec 09 14:56:21 crc kubenswrapper[4770]: I1209 14:56:21.151092 4770 scope.go:117] "RemoveContainer" containerID="6a68f4d56e77125cce1fe7b05e9971c80fc5ff934d4f6bc8be393b4185728934"
Dec 09 14:56:21 crc kubenswrapper[4770]: I1209 14:56:21.175833 4770 scope.go:117] "RemoveContainer" containerID="6e5142a8d1165bca5f6522ba9d0ac1c51ef242a5bf00dbef9bddf57700453021"
Dec 09 14:56:21 crc kubenswrapper[4770]: I1209
14:56:21.208910 4770 scope.go:117] "RemoveContainer" containerID="0ea7c794b898b63759d7c43611eacc61fd1a73a1c6ef28fb65e6aa042e98b1c5" Dec 09 14:56:21 crc kubenswrapper[4770]: I1209 14:56:21.228971 4770 scope.go:117] "RemoveContainer" containerID="3088c278894c834a21b82b2c59e3c57378aa4f64c913bf334ee5b2744bbe026b" Dec 09 14:56:32 crc kubenswrapper[4770]: E1209 14:56:32.591074 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:56:33 crc kubenswrapper[4770]: E1209 14:56:33.590173 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:56:45 crc kubenswrapper[4770]: E1209 14:56:45.590453 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:56:46 crc kubenswrapper[4770]: E1209 14:56:46.593824 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:56:59 crc kubenswrapper[4770]: E1209 14:56:59.596851 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:56:59 crc kubenswrapper[4770]: I1209 14:56:59.597428 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:56:59 crc kubenswrapper[4770]: E1209 14:56:59.710654 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:56:59 crc kubenswrapper[4770]: E1209 14:56:59.711069 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 14:56:59 crc kubenswrapper[4770]: E1209 14:56:59.711228 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:56:59 crc kubenswrapper[4770]: E1209 14:56:59.713140 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:57:12 crc kubenswrapper[4770]: E1209 14:57:12.591622 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:57:13 crc kubenswrapper[4770]: E1209 14:57:13.705139 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:57:13 crc kubenswrapper[4770]: E1209 14:57:13.705549 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 14:57:13 crc kubenswrapper[4770]: E1209 14:57:13.705750 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 14:57:13 crc kubenswrapper[4770]: E1209 14:57:13.706961 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:57:14 crc kubenswrapper[4770]: I1209 14:57:14.243528 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:57:14 crc kubenswrapper[4770]: I1209 14:57:14.243925 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[4770]: I1209 14:57:22.047819 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7m5xq"] Dec 09 14:57:22 crc kubenswrapper[4770]: I1209 14:57:22.059679 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gbddk"] Dec 09 14:57:22 crc kubenswrapper[4770]: I1209 14:57:22.069346 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vg222"] Dec 09 14:57:22 crc kubenswrapper[4770]: I1209 14:57:22.079659 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vg222"] Dec 09 14:57:22 crc kubenswrapper[4770]: I1209 14:57:22.088437 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7m5xq"] Dec 09 14:57:22 crc kubenswrapper[4770]: I1209 14:57:22.099629 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gbddk"] Dec 09 
14:57:22 crc kubenswrapper[4770]: I1209 14:57:22.605885 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1346b759-25b3-41df-9fa0-b7b1137fce00" path="/var/lib/kubelet/pods/1346b759-25b3-41df-9fa0-b7b1137fce00/volumes" Dec 09 14:57:22 crc kubenswrapper[4770]: I1209 14:57:22.608335 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c0c709-cb21-49e0-ba23-211f0cd1749d" path="/var/lib/kubelet/pods/b0c0c709-cb21-49e0-ba23-211f0cd1749d/volumes" Dec 09 14:57:22 crc kubenswrapper[4770]: I1209 14:57:22.612949 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d4710e-a3e8-493e-9f2c-38839187d587" path="/var/lib/kubelet/pods/f4d4710e-a3e8-493e-9f2c-38839187d587/volumes" Dec 09 14:57:25 crc kubenswrapper[4770]: E1209 14:57:25.590390 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:57:26 crc kubenswrapper[4770]: E1209 14:57:26.592144 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:57:39 crc kubenswrapper[4770]: E1209 14:57:39.591344 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:57:39 crc kubenswrapper[4770]: E1209 14:57:39.591667 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:57:44 crc kubenswrapper[4770]: I1209 14:57:44.243947 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:57:44 crc kubenswrapper[4770]: I1209 14:57:44.244592 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:57:51 crc kubenswrapper[4770]: E1209 14:57:51.591140 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:57:52 
crc kubenswrapper[4770]: I1209 14:57:52.053058 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bwk2j"] Dec 09 14:57:52 crc kubenswrapper[4770]: I1209 14:57:52.066286 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bwk2j"] Dec 09 14:57:52 crc kubenswrapper[4770]: I1209 14:57:52.602008 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efd509c2-c0ac-450e-84d3-14e9e8935f1c" path="/var/lib/kubelet/pods/efd509c2-c0ac-450e-84d3-14e9e8935f1c/volumes" Dec 09 14:57:54 crc kubenswrapper[4770]: E1209 14:57:54.591149 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:57:59 crc kubenswrapper[4770]: I1209 14:57:59.035124 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-k7gz4"] Dec 09 14:57:59 crc kubenswrapper[4770]: I1209 14:57:59.050707 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-k7gz4"] Dec 09 14:58:00 crc kubenswrapper[4770]: I1209 14:58:00.600704 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="017d5a4f-99ba-4d3f-9053-207e6f414ab1" path="/var/lib/kubelet/pods/017d5a4f-99ba-4d3f-9053-207e6f414ab1/volumes" Dec 09 14:58:01 crc kubenswrapper[4770]: I1209 14:58:01.040392 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dpmnb"] Dec 09 14:58:01 crc kubenswrapper[4770]: I1209 14:58:01.049842 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dpmnb"] Dec 09 14:58:02 crc kubenswrapper[4770]: I1209 14:58:02.614694 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7669f5b-7406-4ef5-833b-f69821551b08" path="/var/lib/kubelet/pods/b7669f5b-7406-4ef5-833b-f69821551b08/volumes" Dec 09 14:58:03 crc kubenswrapper[4770]: E1209 14:58:03.591314 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:58:07 crc kubenswrapper[4770]: E1209 14:58:07.590369 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:58:10 crc kubenswrapper[4770]: I1209 14:58:10.034106 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-2k6db"] Dec 09 14:58:10 crc kubenswrapper[4770]: I1209 14:58:10.045461 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-2k6db"] Dec 09 14:58:10 crc kubenswrapper[4770]: I1209 14:58:10.604548 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011902e7-c27f-4298-abd3-93eea4d5c579" path="/var/lib/kubelet/pods/011902e7-c27f-4298-abd3-93eea4d5c579/volumes" Dec 09 14:58:14 crc kubenswrapper[4770]: I1209 14:58:14.243566 4770 
patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:58:14 crc kubenswrapper[4770]: I1209 14:58:14.243983 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:58:14 crc kubenswrapper[4770]: I1209 14:58:14.244040 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj"
Dec 09 14:58:14 crc kubenswrapper[4770]: I1209 14:58:14.244976 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f547c2f2a6f46a3f60619acf50fcf46bf0865aeaee5c1b464656d2881b7b9a43"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 14:58:14 crc kubenswrapper[4770]: I1209 14:58:14.245047 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://f547c2f2a6f46a3f60619acf50fcf46bf0865aeaee5c1b464656d2881b7b9a43" gracePeriod=600
Dec 09 14:58:15 crc kubenswrapper[4770]: I1209 14:58:15.136275 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="f547c2f2a6f46a3f60619acf50fcf46bf0865aeaee5c1b464656d2881b7b9a43" exitCode=0
Dec 09 14:58:15 crc kubenswrapper[4770]: I1209 14:58:15.136929 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"f547c2f2a6f46a3f60619acf50fcf46bf0865aeaee5c1b464656d2881b7b9a43"}
Dec 09 14:58:15 crc kubenswrapper[4770]: I1209 14:58:15.136962 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"}
Dec 09 14:58:15 crc kubenswrapper[4770]: I1209 14:58:15.137017 4770 scope.go:117] "RemoveContainer" containerID="47834a291afba3a387ac82f02185c336340993dcccaf2c5585ee47a1887d997b"
Dec 09 14:58:17 crc kubenswrapper[4770]: E1209 14:58:17.590669 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 14:58:19 crc kubenswrapper[4770]: E1209 14:58:19.590318 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
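
The sequence above is the standard liveness-probe restart path: the HTTP probe to 127.0.0.1:8798/health gets connection refused, the kubelet marks the container unhealthy once the failure threshold is reached, kills it with the pod's termination grace period (600s here), and the PLEG then reports ContainerDied followed by ContainerStarted for the replacement. Kubernetes counts an HTTP probe as successful when the response status is at least 200 and below 400. A minimal stand-in for that check (the endpoint is copied from the log; everything else is illustrative):

    import urllib.error
    import urllib.request

    def http_probe(url: str, timeout: float = 1.0) -> bool:
        """Return True only for a success-range (2xx/3xx) response."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            return False  # covers "connect: connection refused" as logged above

    print(http_probe("http://127.0.0.1:8798/health"))
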
pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:58:21 crc kubenswrapper[4770]: I1209 14:58:21.452903 4770 scope.go:117] "RemoveContainer" containerID="9ef5bd99517f705fc3c0911971efa70546fd0f1113c3d6496c609e89f7f6d6bd" Dec 09 14:58:21 crc kubenswrapper[4770]: I1209 14:58:21.535934 4770 scope.go:117] "RemoveContainer" containerID="2825d1689691058f8fb46a00939a2daa4df66362ee5fa626f8922f60b83cf711" Dec 09 14:58:21 crc kubenswrapper[4770]: I1209 14:58:21.636973 4770 scope.go:117] "RemoveContainer" containerID="acb47c1355df5a7b18e5cccaf6e6aa2b38b84a39db33f01c8e8af0173cea0f8f" Dec 09 14:58:21 crc kubenswrapper[4770]: I1209 14:58:21.706948 4770 scope.go:117] "RemoveContainer" containerID="e9c8c2372e3d88bd0ebf9adedc755ef966edd0b442bb925b096e3cd4bda2a85f" Dec 09 14:58:21 crc kubenswrapper[4770]: I1209 14:58:21.756092 4770 scope.go:117] "RemoveContainer" containerID="bb6c72299af868fc9e19c08ee82ba82f5f78117c5f167dfe7a3996fb401c3ff9" Dec 09 14:58:21 crc kubenswrapper[4770]: I1209 14:58:21.818275 4770 scope.go:117] "RemoveContainer" containerID="53a17b5b76555580cebf00cc3ccdfccc549095929ee984660b7daf6437f7f746" Dec 09 14:58:21 crc kubenswrapper[4770]: I1209 14:58:21.887229 4770 scope.go:117] "RemoveContainer" containerID="c3f5c3aff4d7cd88503269c521e5f346c17559e69d1db9d9b3d4514c0aaf414b" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.309771 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sp7lt"] Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.314404 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.319016 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sp7lt"] Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.563479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g785c\" (UniqueName: \"kubernetes.io/projected/2e0cac97-2c80-4221-99e6-a82ec89042ff-kube-api-access-g785c\") pod \"redhat-operators-sp7lt\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.563893 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-utilities\") pod \"redhat-operators-sp7lt\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.564008 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-catalog-content\") pod \"redhat-operators-sp7lt\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.666077 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-utilities\") pod \"redhat-operators-sp7lt\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.666234 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-catalog-content\") pod \"redhat-operators-sp7lt\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.666402 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g785c\" (UniqueName: \"kubernetes.io/projected/2e0cac97-2c80-4221-99e6-a82ec89042ff-kube-api-access-g785c\") pod \"redhat-operators-sp7lt\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.667656 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-utilities\") pod \"redhat-operators-sp7lt\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.667771 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-catalog-content\") pod \"redhat-operators-sp7lt\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.686648 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g785c\" (UniqueName: \"kubernetes.io/projected/2e0cac97-2c80-4221-99e6-a82ec89042ff-kube-api-access-g785c\") pod \"redhat-operators-sp7lt\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:24 crc kubenswrapper[4770]: I1209 14:58:24.938647 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:25 crc kubenswrapper[4770]: I1209 14:58:25.466961 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sp7lt"] Dec 09 14:58:26 crc kubenswrapper[4770]: I1209 14:58:26.319553 4770 generic.go:334] "Generic (PLEG): container finished" podID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerID="430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c" exitCode=0 Dec 09 14:58:26 crc kubenswrapper[4770]: I1209 14:58:26.319866 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7lt" event={"ID":"2e0cac97-2c80-4221-99e6-a82ec89042ff","Type":"ContainerDied","Data":"430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c"} Dec 09 14:58:26 crc kubenswrapper[4770]: I1209 14:58:26.319899 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7lt" event={"ID":"2e0cac97-2c80-4221-99e6-a82ec89042ff","Type":"ContainerStarted","Data":"27decd7725cf9213f2c509687c5ef436d71c1fb3bde4f6e23841f96a7996985f"} Dec 09 14:58:28 crc kubenswrapper[4770]: I1209 14:58:28.360638 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7lt" event={"ID":"2e0cac97-2c80-4221-99e6-a82ec89042ff","Type":"ContainerStarted","Data":"196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607"} Dec 09 14:58:29 crc kubenswrapper[4770]: E1209 14:58:29.590702 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:58:30 crc kubenswrapper[4770]: E1209 14:58:30.590676 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:58:34 crc kubenswrapper[4770]: I1209 14:58:34.428126 4770 generic.go:334] "Generic (PLEG): container finished" podID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerID="196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607" exitCode=0 Dec 09 14:58:34 crc kubenswrapper[4770]: I1209 14:58:34.428197 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7lt" event={"ID":"2e0cac97-2c80-4221-99e6-a82ec89042ff","Type":"ContainerDied","Data":"196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607"} Dec 09 14:58:35 crc kubenswrapper[4770]: I1209 14:58:35.440508 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7lt" event={"ID":"2e0cac97-2c80-4221-99e6-a82ec89042ff","Type":"ContainerStarted","Data":"af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a"} Dec 09 14:58:35 crc kubenswrapper[4770]: I1209 14:58:35.467903 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sp7lt" podStartSLOduration=2.930520034 podStartE2EDuration="11.46786693s" podCreationTimestamp="2025-12-09 14:58:24 +0000 UTC" firstStartedPulling="2025-12-09 14:58:26.32216087 +0000 UTC 
m=+2138.218363006" lastFinishedPulling="2025-12-09 14:58:34.859507766 +0000 UTC m=+2146.755709902" observedRunningTime="2025-12-09 14:58:35.464313453 +0000 UTC m=+2147.360515589" watchObservedRunningTime="2025-12-09 14:58:35.46786693 +0000 UTC m=+2147.364069066" Dec 09 14:58:41 crc kubenswrapper[4770]: E1209 14:58:41.591236 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:58:42 crc kubenswrapper[4770]: E1209 14:58:42.589536 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:58:44 crc kubenswrapper[4770]: I1209 14:58:44.939693 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:44 crc kubenswrapper[4770]: I1209 14:58:44.940175 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:44 crc kubenswrapper[4770]: I1209 14:58:44.996325 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:45 crc kubenswrapper[4770]: I1209 14:58:45.650539 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:45 crc kubenswrapper[4770]: I1209 14:58:45.706464 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sp7lt"] Dec 09 14:58:47 crc kubenswrapper[4770]: I1209 14:58:47.599119 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sp7lt" podUID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerName="registry-server" containerID="cri-o://af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a" gracePeriod=2 Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.228407 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.278250 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-utilities\") pod \"2e0cac97-2c80-4221-99e6-a82ec89042ff\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.278415 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g785c\" (UniqueName: \"kubernetes.io/projected/2e0cac97-2c80-4221-99e6-a82ec89042ff-kube-api-access-g785c\") pod \"2e0cac97-2c80-4221-99e6-a82ec89042ff\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.278487 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-catalog-content\") pod \"2e0cac97-2c80-4221-99e6-a82ec89042ff\" (UID: \"2e0cac97-2c80-4221-99e6-a82ec89042ff\") " Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.280482 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-utilities" (OuterVolumeSpecName: "utilities") pod "2e0cac97-2c80-4221-99e6-a82ec89042ff" (UID: "2e0cac97-2c80-4221-99e6-a82ec89042ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.285641 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0cac97-2c80-4221-99e6-a82ec89042ff-kube-api-access-g785c" (OuterVolumeSpecName: "kube-api-access-g785c") pod "2e0cac97-2c80-4221-99e6-a82ec89042ff" (UID: "2e0cac97-2c80-4221-99e6-a82ec89042ff"). InnerVolumeSpecName "kube-api-access-g785c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.380903 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.380938 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g785c\" (UniqueName: \"kubernetes.io/projected/2e0cac97-2c80-4221-99e6-a82ec89042ff-kube-api-access-g785c\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.405103 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e0cac97-2c80-4221-99e6-a82ec89042ff" (UID: "2e0cac97-2c80-4221-99e6-a82ec89042ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.483092 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0cac97-2c80-4221-99e6-a82ec89042ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.610977 4770 generic.go:334] "Generic (PLEG): container finished" podID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerID="af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a" exitCode=0 Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.611022 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7lt" event={"ID":"2e0cac97-2c80-4221-99e6-a82ec89042ff","Type":"ContainerDied","Data":"af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a"} Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.611049 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7lt" event={"ID":"2e0cac97-2c80-4221-99e6-a82ec89042ff","Type":"ContainerDied","Data":"27decd7725cf9213f2c509687c5ef436d71c1fb3bde4f6e23841f96a7996985f"} Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.611070 4770 scope.go:117] "RemoveContainer" containerID="af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.611215 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sp7lt" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.643066 4770 scope.go:117] "RemoveContainer" containerID="196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.657455 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sp7lt"] Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.671567 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sp7lt"] Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.682963 4770 scope.go:117] "RemoveContainer" containerID="430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.732611 4770 scope.go:117] "RemoveContainer" containerID="af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a" Dec 09 14:58:48 crc kubenswrapper[4770]: E1209 14:58:48.736248 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a\": container with ID starting with af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a not found: ID does not exist" containerID="af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.736280 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a"} err="failed to get container status \"af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a\": rpc error: code = NotFound desc = could not find container \"af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a\": container with ID starting with af9475b5613848fa8ce074b4cd435cc6b11f6bbcaf0ff6e1a8b93ee82af46a3a not found: ID does not exist" Dec 09 14:58:48 crc 
kubenswrapper[4770]: I1209 14:58:48.736302 4770 scope.go:117] "RemoveContainer" containerID="196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607" Dec 09 14:58:48 crc kubenswrapper[4770]: E1209 14:58:48.736765 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607\": container with ID starting with 196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607 not found: ID does not exist" containerID="196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.736784 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607"} err="failed to get container status \"196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607\": rpc error: code = NotFound desc = could not find container \"196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607\": container with ID starting with 196e17e00bf6ff3f3d4fec67936988f0bccb12c0c3a038a12f951c545b1e7607 not found: ID does not exist" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.736796 4770 scope.go:117] "RemoveContainer" containerID="430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c" Dec 09 14:58:48 crc kubenswrapper[4770]: E1209 14:58:48.737392 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c\": container with ID starting with 430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c not found: ID does not exist" containerID="430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c" Dec 09 14:58:48 crc kubenswrapper[4770]: I1209 14:58:48.737431 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c"} err="failed to get container status \"430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c\": rpc error: code = NotFound desc = could not find container \"430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c\": container with ID starting with 430c91c98d646f98cd8eb40c881a5e786f61aba46b67dab653226e9a0538356c not found: ID does not exist" Dec 09 14:58:50 crc kubenswrapper[4770]: I1209 14:58:50.604547 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0cac97-2c80-4221-99e6-a82ec89042ff" path="/var/lib/kubelet/pods/2e0cac97-2c80-4221-99e6-a82ec89042ff/volumes" Dec 09 14:58:51 crc kubenswrapper[4770]: I1209 14:58:51.048099 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-s92kc"] Dec 09 14:58:51 crc kubenswrapper[4770]: I1209 14:58:51.059393 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6j4pg"] Dec 09 14:58:51 crc kubenswrapper[4770]: I1209 14:58:51.071586 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-s92kc"] Dec 09 14:58:51 crc kubenswrapper[4770]: I1209 14:58:51.080479 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f887-account-create-update-22fct"] Dec 09 14:58:51 crc kubenswrapper[4770]: I1209 14:58:51.088603 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6j4pg"] Dec 09 14:58:51 crc 
kubenswrapper[4770]: I1209 14:58:51.099233 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f887-account-create-update-22fct"] Dec 09 14:58:52 crc kubenswrapper[4770]: I1209 14:58:52.023398 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4266-account-create-update-lc6m7"] Dec 09 14:58:52 crc kubenswrapper[4770]: I1209 14:58:52.032120 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4266-account-create-update-lc6m7"] Dec 09 14:58:52 crc kubenswrapper[4770]: E1209 14:58:52.591752 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:58:52 crc kubenswrapper[4770]: I1209 14:58:52.606289 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c4b0e5-c858-457d-ad30-71166591f03f" path="/var/lib/kubelet/pods/13c4b0e5-c858-457d-ad30-71166591f03f/volumes" Dec 09 14:58:52 crc kubenswrapper[4770]: I1209 14:58:52.608024 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5700d19d-1648-4915-a6a2-39d8206f438c" path="/var/lib/kubelet/pods/5700d19d-1648-4915-a6a2-39d8206f438c/volumes" Dec 09 14:58:52 crc kubenswrapper[4770]: I1209 14:58:52.610080 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f4b2573-dd09-4e1a-9724-af01d57049ca" path="/var/lib/kubelet/pods/6f4b2573-dd09-4e1a-9724-af01d57049ca/volumes" Dec 09 14:58:52 crc kubenswrapper[4770]: I1209 14:58:52.611398 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5214ff-7385-4380-be7e-1928902df33a" path="/var/lib/kubelet/pods/ff5214ff-7385-4380-be7e-1928902df33a/volumes" Dec 09 14:58:53 crc kubenswrapper[4770]: I1209 14:58:53.052553 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8vx5w"] Dec 09 14:58:53 crc kubenswrapper[4770]: I1209 14:58:53.098049 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7735-account-create-update-5p282"] Dec 09 14:58:53 crc kubenswrapper[4770]: I1209 14:58:53.124226 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7735-account-create-update-5p282"] Dec 09 14:58:53 crc kubenswrapper[4770]: I1209 14:58:53.137255 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8vx5w"] Dec 09 14:58:53 crc kubenswrapper[4770]: E1209 14:58:53.590282 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:58:54 crc kubenswrapper[4770]: I1209 14:58:54.599643 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20162029-a6e5-432d-93e1-54d7d9aeed22" path="/var/lib/kubelet/pods/20162029-a6e5-432d-93e1-54d7d9aeed22/volumes" Dec 09 14:58:54 crc kubenswrapper[4770]: I1209 14:58:54.600856 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d79eb99a-b722-426a-aab8-ff0404972446" path="/var/lib/kubelet/pods/d79eb99a-b722-426a-aab8-ff0404972446/volumes" Dec 09 14:59:06 crc 
kubenswrapper[4770]: E1209 14:59:06.593650 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:59:08 crc kubenswrapper[4770]: E1209 14:59:08.602284 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:59:18 crc kubenswrapper[4770]: E1209 14:59:18.599848 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:59:20 crc kubenswrapper[4770]: E1209 14:59:20.591081 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:59:22 crc kubenswrapper[4770]: I1209 14:59:22.041896 4770 scope.go:117] "RemoveContainer" containerID="31244015d95e86cf3a0ff943ee3cdaaa825225f962900da68b7f7a8e05d875ff" Dec 09 14:59:22 crc kubenswrapper[4770]: I1209 14:59:22.066274 4770 scope.go:117] "RemoveContainer" containerID="2b28c4a760dd7a697fe72b42796607fe0e3c4adfbdf1ed1491a16240a6341c14" Dec 09 14:59:22 crc kubenswrapper[4770]: I1209 14:59:22.129775 4770 scope.go:117] "RemoveContainer" containerID="a6b7832d633e1b84ae204cc50e3dc40e5b7dac8b935f9aca182835b63a5713e2" Dec 09 14:59:22 crc kubenswrapper[4770]: I1209 14:59:22.185789 4770 scope.go:117] "RemoveContainer" containerID="9d256ddf3d4add255dde76abadecdd5e04206d525ec019c04e3a1508bbcc96bd" Dec 09 14:59:22 crc kubenswrapper[4770]: I1209 14:59:22.238383 4770 scope.go:117] "RemoveContainer" containerID="c1d4af286b992f1d650444bce28aff09e3d4c7631473b64bc3e6f29d3185fef5" Dec 09 14:59:22 crc kubenswrapper[4770]: I1209 14:59:22.292492 4770 scope.go:117] "RemoveContainer" containerID="83efbc95987ae6603ab00b88f9ac384854c2d318fbb55261d926363a7b2630f5" Dec 09 14:59:27 crc kubenswrapper[4770]: I1209 14:59:27.048290 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-94lv6"] Dec 09 14:59:27 crc kubenswrapper[4770]: I1209 14:59:27.059245 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-94lv6"] Dec 09 14:59:28 crc kubenswrapper[4770]: I1209 14:59:28.605919 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959155d5-0b58-4005-b2cf-5e2dd53e4f06" path="/var/lib/kubelet/pods/959155d5-0b58-4005-b2cf-5e2dd53e4f06/volumes" Dec 09 14:59:33 crc kubenswrapper[4770]: E1209 14:59:33.591821 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:59:34 crc kubenswrapper[4770]: E1209 14:59:34.591390 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:59:46 crc kubenswrapper[4770]: E1209 14:59:46.591321 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 14:59:47 crc kubenswrapper[4770]: E1209 14:59:47.593570 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:59:52 crc kubenswrapper[4770]: I1209 14:59:52.088614 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-q88sr"] Dec 09 14:59:52 crc kubenswrapper[4770]: I1209 14:59:52.102225 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-q88sr"] Dec 09 14:59:52 crc kubenswrapper[4770]: I1209 14:59:52.608050 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47bdf914-718d-409a-a1d2-33f40c08e382" path="/var/lib/kubelet/pods/47bdf914-718d-409a-a1d2-33f40c08e382/volumes" Dec 09 14:59:58 crc kubenswrapper[4770]: E1209 14:59:58.597090 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 14:59:59 crc kubenswrapper[4770]: I1209 14:59:59.027013 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95258"] Dec 09 14:59:59 crc kubenswrapper[4770]: I1209 14:59:59.037005 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95258"] Dec 09 14:59:59 crc kubenswrapper[4770]: E1209 14:59:59.590304 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.163213 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4"] Dec 09 15:00:00 crc kubenswrapper[4770]: E1209 15:00:00.164501 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerName="extract-utilities" Dec 09 
15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.164534 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerName="extract-utilities" Dec 09 15:00:00 crc kubenswrapper[4770]: E1209 15:00:00.164574 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerName="registry-server" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.164580 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerName="registry-server" Dec 09 15:00:00 crc kubenswrapper[4770]: E1209 15:00:00.164608 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerName="extract-content" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.164614 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerName="extract-content" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.165411 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0cac97-2c80-4221-99e6-a82ec89042ff" containerName="registry-server" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.168135 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.171820 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.176365 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.176395 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4"] Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.277751 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3acf34a-2bce-4aea-bd90-34be5cf1a752-secret-volume\") pod \"collect-profiles-29421540-nqnq4\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.277800 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3acf34a-2bce-4aea-bd90-34be5cf1a752-config-volume\") pod \"collect-profiles-29421540-nqnq4\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.277940 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2tmg\" (UniqueName: \"kubernetes.io/projected/f3acf34a-2bce-4aea-bd90-34be5cf1a752-kube-api-access-b2tmg\") pod \"collect-profiles-29421540-nqnq4\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.380304 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2tmg\" (UniqueName: 
\"kubernetes.io/projected/f3acf34a-2bce-4aea-bd90-34be5cf1a752-kube-api-access-b2tmg\") pod \"collect-profiles-29421540-nqnq4\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.380502 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3acf34a-2bce-4aea-bd90-34be5cf1a752-secret-volume\") pod \"collect-profiles-29421540-nqnq4\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.380536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3acf34a-2bce-4aea-bd90-34be5cf1a752-config-volume\") pod \"collect-profiles-29421540-nqnq4\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.381847 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3acf34a-2bce-4aea-bd90-34be5cf1a752-config-volume\") pod \"collect-profiles-29421540-nqnq4\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.386857 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3acf34a-2bce-4aea-bd90-34be5cf1a752-secret-volume\") pod \"collect-profiles-29421540-nqnq4\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.398781 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2tmg\" (UniqueName: \"kubernetes.io/projected/f3acf34a-2bce-4aea-bd90-34be5cf1a752-kube-api-access-b2tmg\") pod \"collect-profiles-29421540-nqnq4\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.508814 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.612161 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef9f303-cdd6-4694-856c-21a1589935dd" path="/var/lib/kubelet/pods/3ef9f303-cdd6-4694-856c-21a1589935dd/volumes" Dec 09 15:00:00 crc kubenswrapper[4770]: I1209 15:00:00.983534 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4"] Dec 09 15:00:01 crc kubenswrapper[4770]: I1209 15:00:01.299665 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" event={"ID":"f3acf34a-2bce-4aea-bd90-34be5cf1a752","Type":"ContainerStarted","Data":"50a1cc15553812ad2f10915a554a60d2f92061c51707775c68d17ee44ff16270"} Dec 09 15:00:01 crc kubenswrapper[4770]: I1209 15:00:01.299712 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" event={"ID":"f3acf34a-2bce-4aea-bd90-34be5cf1a752","Type":"ContainerStarted","Data":"9707b93af44b41d90d532a2895c6859d5bb97b59e7578549bbd72e5cbf759cb8"} Dec 09 15:00:02 crc kubenswrapper[4770]: I1209 15:00:02.310359 4770 generic.go:334] "Generic (PLEG): container finished" podID="f3acf34a-2bce-4aea-bd90-34be5cf1a752" containerID="50a1cc15553812ad2f10915a554a60d2f92061c51707775c68d17ee44ff16270" exitCode=0 Dec 09 15:00:02 crc kubenswrapper[4770]: I1209 15:00:02.310414 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" event={"ID":"f3acf34a-2bce-4aea-bd90-34be5cf1a752","Type":"ContainerDied","Data":"50a1cc15553812ad2f10915a554a60d2f92061c51707775c68d17ee44ff16270"} Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.746120 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.855588 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3acf34a-2bce-4aea-bd90-34be5cf1a752-config-volume" (OuterVolumeSpecName: "config-volume") pod "f3acf34a-2bce-4aea-bd90-34be5cf1a752" (UID: "f3acf34a-2bce-4aea-bd90-34be5cf1a752"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.856262 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3acf34a-2bce-4aea-bd90-34be5cf1a752-config-volume\") pod \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.856489 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2tmg\" (UniqueName: \"kubernetes.io/projected/f3acf34a-2bce-4aea-bd90-34be5cf1a752-kube-api-access-b2tmg\") pod \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.856546 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3acf34a-2bce-4aea-bd90-34be5cf1a752-secret-volume\") pod \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\" (UID: \"f3acf34a-2bce-4aea-bd90-34be5cf1a752\") " Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.857517 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3acf34a-2bce-4aea-bd90-34be5cf1a752-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.862382 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3acf34a-2bce-4aea-bd90-34be5cf1a752-kube-api-access-b2tmg" (OuterVolumeSpecName: "kube-api-access-b2tmg") pod "f3acf34a-2bce-4aea-bd90-34be5cf1a752" (UID: "f3acf34a-2bce-4aea-bd90-34be5cf1a752"). InnerVolumeSpecName "kube-api-access-b2tmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.862463 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3acf34a-2bce-4aea-bd90-34be5cf1a752-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f3acf34a-2bce-4aea-bd90-34be5cf1a752" (UID: "f3acf34a-2bce-4aea-bd90-34be5cf1a752"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.959437 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2tmg\" (UniqueName: \"kubernetes.io/projected/f3acf34a-2bce-4aea-bd90-34be5cf1a752-kube-api-access-b2tmg\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:03 crc kubenswrapper[4770]: I1209 15:00:03.959478 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3acf34a-2bce-4aea-bd90-34be5cf1a752-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:04 crc kubenswrapper[4770]: I1209 15:00:04.331031 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" event={"ID":"f3acf34a-2bce-4aea-bd90-34be5cf1a752","Type":"ContainerDied","Data":"9707b93af44b41d90d532a2895c6859d5bb97b59e7578549bbd72e5cbf759cb8"} Dec 09 15:00:04 crc kubenswrapper[4770]: I1209 15:00:04.331084 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9707b93af44b41d90d532a2895c6859d5bb97b59e7578549bbd72e5cbf759cb8" Dec 09 15:00:04 crc kubenswrapper[4770]: I1209 15:00:04.331121 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4" Dec 09 15:00:04 crc kubenswrapper[4770]: I1209 15:00:04.392602 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7"] Dec 09 15:00:04 crc kubenswrapper[4770]: I1209 15:00:04.402695 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-8hdl7"] Dec 09 15:00:04 crc kubenswrapper[4770]: I1209 15:00:04.602565 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086" path="/var/lib/kubelet/pods/07bf67dc-d1b4-4cf6-9ab2-f2a4270f9086/volumes" Dec 09 15:00:12 crc kubenswrapper[4770]: E1209 15:00:12.591662 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:00:12 crc kubenswrapper[4770]: E1209 15:00:12.591791 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:00:14 crc kubenswrapper[4770]: I1209 15:00:14.243091 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:00:14 crc kubenswrapper[4770]: I1209 15:00:14.244891 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:00:21 crc kubenswrapper[4770]: I1209 15:00:21.036595 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-nr4sn"] Dec 09 15:00:21 crc kubenswrapper[4770]: I1209 15:00:21.053463 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-nr4sn"] Dec 09 15:00:22 crc kubenswrapper[4770]: I1209 15:00:22.460614 4770 scope.go:117] "RemoveContainer" containerID="76ec275019705a80b3d8b44ee3e351629255b750fd74cb2f3437f86cfdbd7fa9" Dec 09 15:00:22 crc kubenswrapper[4770]: I1209 15:00:22.504602 4770 scope.go:117] "RemoveContainer" containerID="529964bbfbc9f68c4ab4597a12859313aadc1f6d9ae136cd8d4473b7b6c88166" Dec 09 15:00:22 crc kubenswrapper[4770]: I1209 15:00:22.554023 4770 scope.go:117] "RemoveContainer" containerID="430ea0cf72b2ab03a618fadf887b3a95a3522f4d82b2e7252511d1c7ed91db98" Dec 09 15:00:22 crc kubenswrapper[4770]: I1209 15:00:22.595483 4770 scope.go:117] "RemoveContainer" containerID="0e6bfc815819f1d161e04d054694241812021fef1409346a3c99fe68253fc865" Dec 09 15:00:22 crc kubenswrapper[4770]: I1209 15:00:22.600270 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9decc75-a5aa-4b47-afb5-4f7c95b3796e" 
path="/var/lib/kubelet/pods/b9decc75-a5aa-4b47-afb5-4f7c95b3796e/volumes" Dec 09 15:00:26 crc kubenswrapper[4770]: E1209 15:00:26.591083 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:00:27 crc kubenswrapper[4770]: E1209 15:00:27.593524 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:00:40 crc kubenswrapper[4770]: E1209 15:00:40.591324 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:00:40 crc kubenswrapper[4770]: E1209 15:00:40.592766 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:00:44 crc kubenswrapper[4770]: I1209 15:00:44.243282 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:00:44 crc kubenswrapper[4770]: I1209 15:00:44.243933 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.645233 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lctvc"] Dec 09 15:00:48 crc kubenswrapper[4770]: E1209 15:00:48.646775 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3acf34a-2bce-4aea-bd90-34be5cf1a752" containerName="collect-profiles" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.646868 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3acf34a-2bce-4aea-bd90-34be5cf1a752" containerName="collect-profiles" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.647173 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3acf34a-2bce-4aea-bd90-34be5cf1a752" containerName="collect-profiles" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.649249 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lctvc"] Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.649424 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.757912 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7lx\" (UniqueName: \"kubernetes.io/projected/e9c6d8eb-dff6-4490-85de-29e306e835ad-kube-api-access-qb7lx\") pod \"certified-operators-lctvc\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.758011 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-utilities\") pod \"certified-operators-lctvc\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.758117 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-catalog-content\") pod \"certified-operators-lctvc\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.862701 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-catalog-content\") pod \"certified-operators-lctvc\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.862934 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb7lx\" (UniqueName: \"kubernetes.io/projected/e9c6d8eb-dff6-4490-85de-29e306e835ad-kube-api-access-qb7lx\") pod \"certified-operators-lctvc\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.863025 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-utilities\") pod \"certified-operators-lctvc\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.863982 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-utilities\") pod \"certified-operators-lctvc\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.864180 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-catalog-content\") pod \"certified-operators-lctvc\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:48 crc kubenswrapper[4770]: I1209 15:00:48.890000 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb7lx\" (UniqueName: \"kubernetes.io/projected/e9c6d8eb-dff6-4490-85de-29e306e835ad-kube-api-access-qb7lx\") pod 
\"certified-operators-lctvc\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:49 crc kubenswrapper[4770]: I1209 15:00:49.009997 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:49 crc kubenswrapper[4770]: I1209 15:00:49.540139 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lctvc"] Dec 09 15:00:49 crc kubenswrapper[4770]: I1209 15:00:49.848532 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lctvc" event={"ID":"e9c6d8eb-dff6-4490-85de-29e306e835ad","Type":"ContainerStarted","Data":"df8190881f76e01e0692173e1bb79de3b4ecaf1af1b7dd91d8736a5858af7152"} Dec 09 15:00:50 crc kubenswrapper[4770]: I1209 15:00:50.859692 4770 generic.go:334] "Generic (PLEG): container finished" podID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerID="3355157c8bc407f887f5a6e00930f41d3155b3ba9482294407684ab834e7c02e" exitCode=0 Dec 09 15:00:50 crc kubenswrapper[4770]: I1209 15:00:50.860129 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lctvc" event={"ID":"e9c6d8eb-dff6-4490-85de-29e306e835ad","Type":"ContainerDied","Data":"3355157c8bc407f887f5a6e00930f41d3155b3ba9482294407684ab834e7c02e"} Dec 09 15:00:52 crc kubenswrapper[4770]: E1209 15:00:52.591206 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:00:52 crc kubenswrapper[4770]: E1209 15:00:52.591291 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:00:55 crc kubenswrapper[4770]: I1209 15:00:55.910242 4770 generic.go:334] "Generic (PLEG): container finished" podID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerID="69845fcfc3660a0c52bc7e9b405d465c9f6c095ce695f5b93760ee821b15b39d" exitCode=0 Dec 09 15:00:55 crc kubenswrapper[4770]: I1209 15:00:55.910307 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lctvc" event={"ID":"e9c6d8eb-dff6-4490-85de-29e306e835ad","Type":"ContainerDied","Data":"69845fcfc3660a0c52bc7e9b405d465c9f6c095ce695f5b93760ee821b15b39d"} Dec 09 15:00:56 crc kubenswrapper[4770]: I1209 15:00:56.923079 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lctvc" event={"ID":"e9c6d8eb-dff6-4490-85de-29e306e835ad","Type":"ContainerStarted","Data":"e559554ebcb0f72cdf859fe88160472f7b6e0a9f53b55924d178756eb4a2e86e"} Dec 09 15:00:56 crc kubenswrapper[4770]: I1209 15:00:56.946614 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lctvc" podStartSLOduration=3.464205897 podStartE2EDuration="8.946574676s" podCreationTimestamp="2025-12-09 15:00:48 +0000 UTC" firstStartedPulling="2025-12-09 15:00:50.861534588 +0000 UTC m=+2282.757736724" lastFinishedPulling="2025-12-09 
15:00:56.343903347 +0000 UTC m=+2288.240105503" observedRunningTime="2025-12-09 15:00:56.940299806 +0000 UTC m=+2288.836501942" watchObservedRunningTime="2025-12-09 15:00:56.946574676 +0000 UTC m=+2288.842776812" Dec 09 15:00:59 crc kubenswrapper[4770]: I1209 15:00:59.011430 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:59 crc kubenswrapper[4770]: I1209 15:00:59.011795 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:00:59 crc kubenswrapper[4770]: I1209 15:00:59.074661 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.148889 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29421541-9p2fg"] Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.150920 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.165191 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421541-9p2fg"] Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.304215 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-combined-ca-bundle\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.304293 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxqp\" (UniqueName: \"kubernetes.io/projected/35d24b29-d465-4502-bdf7-e4bd6479926c-kube-api-access-xpxqp\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.304483 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-fernet-keys\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.304541 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-config-data\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.406386 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-fernet-keys\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.406489 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-config-data\") 
pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.406552 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-combined-ca-bundle\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.406602 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxqp\" (UniqueName: \"kubernetes.io/projected/35d24b29-d465-4502-bdf7-e4bd6479926c-kube-api-access-xpxqp\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.414776 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-config-data\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.419661 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-fernet-keys\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.420566 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-combined-ca-bundle\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.441593 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxqp\" (UniqueName: \"kubernetes.io/projected/35d24b29-d465-4502-bdf7-e4bd6479926c-kube-api-access-xpxqp\") pod \"keystone-cron-29421541-9p2fg\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.471625 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.926365 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421541-9p2fg"] Dec 09 15:01:00 crc kubenswrapper[4770]: I1209 15:01:00.971781 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421541-9p2fg" event={"ID":"35d24b29-d465-4502-bdf7-e4bd6479926c","Type":"ContainerStarted","Data":"5dc18ff2bd77533e38b90d99f8b4f153f0671cd7f133da543d1c95e35d6c47b5"} Dec 09 15:01:01 crc kubenswrapper[4770]: I1209 15:01:01.985568 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421541-9p2fg" event={"ID":"35d24b29-d465-4502-bdf7-e4bd6479926c","Type":"ContainerStarted","Data":"15a53df09ea6614bcc7b55a5207aae3a3c8a450893662bc0005c66fcd202148a"} Dec 09 15:01:02 crc kubenswrapper[4770]: I1209 15:01:02.007239 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29421541-9p2fg" podStartSLOduration=2.007216434 podStartE2EDuration="2.007216434s" podCreationTimestamp="2025-12-09 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:01:02.003367938 +0000 UTC m=+2293.899570084" watchObservedRunningTime="2025-12-09 15:01:02.007216434 +0000 UTC m=+2293.903418570" Dec 09 15:01:05 crc kubenswrapper[4770]: I1209 15:01:05.040371 4770 generic.go:334] "Generic (PLEG): container finished" podID="35d24b29-d465-4502-bdf7-e4bd6479926c" containerID="15a53df09ea6614bcc7b55a5207aae3a3c8a450893662bc0005c66fcd202148a" exitCode=0 Dec 09 15:01:05 crc kubenswrapper[4770]: I1209 15:01:05.040671 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421541-9p2fg" event={"ID":"35d24b29-d465-4502-bdf7-e4bd6479926c","Type":"ContainerDied","Data":"15a53df09ea6614bcc7b55a5207aae3a3c8a450893662bc0005c66fcd202148a"} Dec 09 15:01:05 crc kubenswrapper[4770]: E1209 15:01:05.591718 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.472843 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.582646 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-combined-ca-bundle\") pod \"35d24b29-d465-4502-bdf7-e4bd6479926c\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.582706 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-config-data\") pod \"35d24b29-d465-4502-bdf7-e4bd6479926c\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.582861 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpxqp\" (UniqueName: \"kubernetes.io/projected/35d24b29-d465-4502-bdf7-e4bd6479926c-kube-api-access-xpxqp\") pod \"35d24b29-d465-4502-bdf7-e4bd6479926c\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.582885 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-fernet-keys\") pod \"35d24b29-d465-4502-bdf7-e4bd6479926c\" (UID: \"35d24b29-d465-4502-bdf7-e4bd6479926c\") " Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.589702 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d24b29-d465-4502-bdf7-e4bd6479926c-kube-api-access-xpxqp" (OuterVolumeSpecName: "kube-api-access-xpxqp") pod "35d24b29-d465-4502-bdf7-e4bd6479926c" (UID: "35d24b29-d465-4502-bdf7-e4bd6479926c"). InnerVolumeSpecName "kube-api-access-xpxqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:01:06 crc kubenswrapper[4770]: E1209 15:01:06.592874 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.598899 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "35d24b29-d465-4502-bdf7-e4bd6479926c" (UID: "35d24b29-d465-4502-bdf7-e4bd6479926c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.620009 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35d24b29-d465-4502-bdf7-e4bd6479926c" (UID: "35d24b29-d465-4502-bdf7-e4bd6479926c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.645289 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-config-data" (OuterVolumeSpecName: "config-data") pod "35d24b29-d465-4502-bdf7-e4bd6479926c" (UID: "35d24b29-d465-4502-bdf7-e4bd6479926c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.685570 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.685608 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.685621 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpxqp\" (UniqueName: \"kubernetes.io/projected/35d24b29-d465-4502-bdf7-e4bd6479926c-kube-api-access-xpxqp\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:06 crc kubenswrapper[4770]: I1209 15:01:06.685634 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35d24b29-d465-4502-bdf7-e4bd6479926c-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:07 crc kubenswrapper[4770]: I1209 15:01:07.070409 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421541-9p2fg" event={"ID":"35d24b29-d465-4502-bdf7-e4bd6479926c","Type":"ContainerDied","Data":"5dc18ff2bd77533e38b90d99f8b4f153f0671cd7f133da543d1c95e35d6c47b5"} Dec 09 15:01:07 crc kubenswrapper[4770]: I1209 15:01:07.070457 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc18ff2bd77533e38b90d99f8b4f153f0671cd7f133da543d1c95e35d6c47b5" Dec 09 15:01:07 crc kubenswrapper[4770]: I1209 15:01:07.070542 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421541-9p2fg" Dec 09 15:01:09 crc kubenswrapper[4770]: I1209 15:01:09.112837 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:01:09 crc kubenswrapper[4770]: I1209 15:01:09.489649 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lctvc"] Dec 09 15:01:09 crc kubenswrapper[4770]: I1209 15:01:09.540047 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7m4fp"] Dec 09 15:01:09 crc kubenswrapper[4770]: I1209 15:01:09.540325 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7m4fp" podUID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerName="registry-server" containerID="cri-o://8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a" gracePeriod=2 Dec 09 15:01:09 crc kubenswrapper[4770]: E1209 15:01:09.784539 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37d57bff_0a2d_4dc2_a9d3_a678f6030fe3.slice/crio-conmon-8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a.scope\": RecentStats: unable to find data in memory cache]" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.129265 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.142344 4770 generic.go:334] "Generic (PLEG): container finished" podID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerID="8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a" exitCode=0 Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.142783 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m4fp" event={"ID":"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3","Type":"ContainerDied","Data":"8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a"} Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.142845 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m4fp" event={"ID":"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3","Type":"ContainerDied","Data":"5b279de969b9ae3db827def8804d72534ff37de5608a30e7639efca123d038ca"} Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.142866 4770 scope.go:117] "RemoveContainer" containerID="8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.142987 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7m4fp" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.196216 4770 scope.go:117] "RemoveContainer" containerID="1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.232506 4770 scope.go:117] "RemoveContainer" containerID="8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.276465 4770 scope.go:117] "RemoveContainer" containerID="8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a" Dec 09 15:01:10 crc kubenswrapper[4770]: E1209 15:01:10.276997 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a\": container with ID starting with 8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a not found: ID does not exist" containerID="8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.277080 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a"} err="failed to get container status \"8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a\": rpc error: code = NotFound desc = could not find container \"8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a\": container with ID starting with 8667bcab79202cf18dd004cde9f93d88ef0e500b04759881f9aa82b68425a59a not found: ID does not exist" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.277110 4770 scope.go:117] "RemoveContainer" containerID="1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd" Dec 09 15:01:10 crc kubenswrapper[4770]: E1209 15:01:10.277576 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd\": container with ID starting with 1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd not found: ID does not exist" containerID="1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.277631 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd"} err="failed to get container status \"1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd\": rpc error: code = NotFound desc = could not find container \"1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd\": container with ID starting with 1e73624a754954b3c1c5a042a0ca5c7e30a7320209583f9f6de57e29277237bd not found: ID does not exist" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.277677 4770 scope.go:117] "RemoveContainer" containerID="8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0" Dec 09 15:01:10 crc kubenswrapper[4770]: E1209 15:01:10.277989 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0\": container with ID starting with 8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0 not found: ID does not exist" containerID="8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0" 
Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.278013 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0"} err="failed to get container status \"8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0\": rpc error: code = NotFound desc = could not find container \"8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0\": container with ID starting with 8d4f995231333cb62bd9e97f58d69f54225d98c91051263f150c5637ea0b35d0 not found: ID does not exist" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.278616 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-catalog-content\") pod \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.278881 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrhkv\" (UniqueName: \"kubernetes.io/projected/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-kube-api-access-hrhkv\") pod \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.279010 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-utilities\") pod \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\" (UID: \"37d57bff-0a2d-4dc2-a9d3-a678f6030fe3\") " Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.280188 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-utilities" (OuterVolumeSpecName: "utilities") pod "37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" (UID: "37d57bff-0a2d-4dc2-a9d3-a678f6030fe3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.285687 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-kube-api-access-hrhkv" (OuterVolumeSpecName: "kube-api-access-hrhkv") pod "37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" (UID: "37d57bff-0a2d-4dc2-a9d3-a678f6030fe3"). InnerVolumeSpecName "kube-api-access-hrhkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.353629 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" (UID: "37d57bff-0a2d-4dc2-a9d3-a678f6030fe3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.381562 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrhkv\" (UniqueName: \"kubernetes.io/projected/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-kube-api-access-hrhkv\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.381599 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.381608 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.477456 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7m4fp"] Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.490477 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7m4fp"] Dec 09 15:01:10 crc kubenswrapper[4770]: I1209 15:01:10.600422 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" path="/var/lib/kubelet/pods/37d57bff-0a2d-4dc2-a9d3-a678f6030fe3/volumes" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.243177 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.243618 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.243665 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.244460 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.244513 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" gracePeriod=600 Dec 09 15:01:14 crc kubenswrapper[4770]: E1209 15:01:14.378354 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.564688 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kc246"] Dec 09 15:01:14 crc kubenswrapper[4770]: E1209 15:01:14.565191 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d24b29-d465-4502-bdf7-e4bd6479926c" containerName="keystone-cron" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.565210 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d24b29-d465-4502-bdf7-e4bd6479926c" containerName="keystone-cron" Dec 09 15:01:14 crc kubenswrapper[4770]: E1209 15:01:14.565238 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerName="registry-server" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.565245 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerName="registry-server" Dec 09 15:01:14 crc kubenswrapper[4770]: E1209 15:01:14.565258 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerName="extract-content" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.565264 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerName="extract-content" Dec 09 15:01:14 crc kubenswrapper[4770]: E1209 15:01:14.565275 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerName="extract-utilities" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.565282 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerName="extract-utilities" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.565465 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d24b29-d465-4502-bdf7-e4bd6479926c" containerName="keystone-cron" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.565491 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d57bff-0a2d-4dc2-a9d3-a678f6030fe3" containerName="registry-server" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.567070 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.571452 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-catalog-content\") pod \"community-operators-kc246\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.571541 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7mvs\" (UniqueName: \"kubernetes.io/projected/00be71da-d168-4d14-9319-5000d9be3cb5-kube-api-access-c7mvs\") pod \"community-operators-kc246\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.571615 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-utilities\") pod \"community-operators-kc246\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.574642 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kc246"] Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.674954 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-utilities\") pod \"community-operators-kc246\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.675097 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-catalog-content\") pod \"community-operators-kc246\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.675177 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7mvs\" (UniqueName: \"kubernetes.io/projected/00be71da-d168-4d14-9319-5000d9be3cb5-kube-api-access-c7mvs\") pod \"community-operators-kc246\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.675433 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-utilities\") pod \"community-operators-kc246\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.675624 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-catalog-content\") pod \"community-operators-kc246\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.696338 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c7mvs\" (UniqueName: \"kubernetes.io/projected/00be71da-d168-4d14-9319-5000d9be3cb5-kube-api-access-c7mvs\") pod \"community-operators-kc246\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:14 crc kubenswrapper[4770]: I1209 15:01:14.913659 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:15 crc kubenswrapper[4770]: I1209 15:01:15.218213 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" exitCode=0 Dec 09 15:01:15 crc kubenswrapper[4770]: I1209 15:01:15.218512 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"} Dec 09 15:01:15 crc kubenswrapper[4770]: I1209 15:01:15.218951 4770 scope.go:117] "RemoveContainer" containerID="f547c2f2a6f46a3f60619acf50fcf46bf0865aeaee5c1b464656d2881b7b9a43" Dec 09 15:01:15 crc kubenswrapper[4770]: I1209 15:01:15.237885 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:01:15 crc kubenswrapper[4770]: E1209 15:01:15.238264 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:01:15 crc kubenswrapper[4770]: I1209 15:01:15.567229 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kc246"] Dec 09 15:01:16 crc kubenswrapper[4770]: I1209 15:01:16.365007 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kc246" event={"ID":"00be71da-d168-4d14-9319-5000d9be3cb5","Type":"ContainerStarted","Data":"8301a024674a89b4b4801880636b8ce3d80a2576d5e2f26f04aa4a0ef569183a"} Dec 09 15:01:17 crc kubenswrapper[4770]: I1209 15:01:17.379911 4770 generic.go:334] "Generic (PLEG): container finished" podID="00be71da-d168-4d14-9319-5000d9be3cb5" containerID="b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e" exitCode=0 Dec 09 15:01:17 crc kubenswrapper[4770]: I1209 15:01:17.379977 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kc246" event={"ID":"00be71da-d168-4d14-9319-5000d9be3cb5","Type":"ContainerDied","Data":"b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e"} Dec 09 15:01:17 crc kubenswrapper[4770]: E1209 15:01:17.589876 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:01:17 crc kubenswrapper[4770]: E1209 15:01:17.589886 4770 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:01:18 crc kubenswrapper[4770]: I1209 15:01:18.403804 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kc246" event={"ID":"00be71da-d168-4d14-9319-5000d9be3cb5","Type":"ContainerStarted","Data":"1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac"} Dec 09 15:01:20 crc kubenswrapper[4770]: I1209 15:01:20.422814 4770 generic.go:334] "Generic (PLEG): container finished" podID="00be71da-d168-4d14-9319-5000d9be3cb5" containerID="1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac" exitCode=0 Dec 09 15:01:20 crc kubenswrapper[4770]: I1209 15:01:20.423001 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kc246" event={"ID":"00be71da-d168-4d14-9319-5000d9be3cb5","Type":"ContainerDied","Data":"1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac"} Dec 09 15:01:22 crc kubenswrapper[4770]: I1209 15:01:22.728678 4770 scope.go:117] "RemoveContainer" containerID="68ab1470e2ce9c49e0a51721502a7ab768924fa479ead3c3f8916880728e50b3" Dec 09 15:01:23 crc kubenswrapper[4770]: I1209 15:01:23.455147 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kc246" event={"ID":"00be71da-d168-4d14-9319-5000d9be3cb5","Type":"ContainerStarted","Data":"26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39"} Dec 09 15:01:23 crc kubenswrapper[4770]: I1209 15:01:23.486449 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kc246" podStartSLOduration=4.084434403 podStartE2EDuration="9.486365822s" podCreationTimestamp="2025-12-09 15:01:14 +0000 UTC" firstStartedPulling="2025-12-09 15:01:17.384184347 +0000 UTC m=+2309.280386523" lastFinishedPulling="2025-12-09 15:01:22.786115806 +0000 UTC m=+2314.682317942" observedRunningTime="2025-12-09 15:01:23.472808163 +0000 UTC m=+2315.369010309" watchObservedRunningTime="2025-12-09 15:01:23.486365822 +0000 UTC m=+2315.382567948" Dec 09 15:01:24 crc kubenswrapper[4770]: I1209 15:01:24.913856 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:24 crc kubenswrapper[4770]: I1209 15:01:24.915335 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:25 crc kubenswrapper[4770]: I1209 15:01:25.990114 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kc246" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" containerName="registry-server" probeResult="failure" output=< Dec 09 15:01:25 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Dec 09 15:01:25 crc kubenswrapper[4770]: > Dec 09 15:01:26 crc kubenswrapper[4770]: I1209 15:01:26.588473 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:01:26 crc kubenswrapper[4770]: E1209 15:01:26.588785 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:01:28 crc kubenswrapper[4770]: E1209 15:01:28.600316 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:01:31 crc kubenswrapper[4770]: E1209 15:01:31.589966 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:01:34 crc kubenswrapper[4770]: I1209 15:01:34.964636 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:35 crc kubenswrapper[4770]: I1209 15:01:35.026131 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:35 crc kubenswrapper[4770]: I1209 15:01:35.216714 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kc246"] Dec 09 15:01:36 crc kubenswrapper[4770]: I1209 15:01:36.599937 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kc246" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" containerName="registry-server" containerID="cri-o://26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39" gracePeriod=2 Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.213324 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.330324 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7mvs\" (UniqueName: \"kubernetes.io/projected/00be71da-d168-4d14-9319-5000d9be3cb5-kube-api-access-c7mvs\") pod \"00be71da-d168-4d14-9319-5000d9be3cb5\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.330403 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-catalog-content\") pod \"00be71da-d168-4d14-9319-5000d9be3cb5\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.330557 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-utilities\") pod \"00be71da-d168-4d14-9319-5000d9be3cb5\" (UID: \"00be71da-d168-4d14-9319-5000d9be3cb5\") " Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.331831 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-utilities" (OuterVolumeSpecName: "utilities") pod "00be71da-d168-4d14-9319-5000d9be3cb5" (UID: "00be71da-d168-4d14-9319-5000d9be3cb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.337413 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00be71da-d168-4d14-9319-5000d9be3cb5-kube-api-access-c7mvs" (OuterVolumeSpecName: "kube-api-access-c7mvs") pod "00be71da-d168-4d14-9319-5000d9be3cb5" (UID: "00be71da-d168-4d14-9319-5000d9be3cb5"). InnerVolumeSpecName "kube-api-access-c7mvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.385892 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00be71da-d168-4d14-9319-5000d9be3cb5" (UID: "00be71da-d168-4d14-9319-5000d9be3cb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.432617 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.432657 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00be71da-d168-4d14-9319-5000d9be3cb5-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.432673 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7mvs\" (UniqueName: \"kubernetes.io/projected/00be71da-d168-4d14-9319-5000d9be3cb5-kube-api-access-c7mvs\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.613602 4770 generic.go:334] "Generic (PLEG): container finished" podID="00be71da-d168-4d14-9319-5000d9be3cb5" containerID="26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39" exitCode=0 Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.613643 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kc246" event={"ID":"00be71da-d168-4d14-9319-5000d9be3cb5","Type":"ContainerDied","Data":"26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39"} Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.613659 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kc246" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.613670 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kc246" event={"ID":"00be71da-d168-4d14-9319-5000d9be3cb5","Type":"ContainerDied","Data":"8301a024674a89b4b4801880636b8ce3d80a2576d5e2f26f04aa4a0ef569183a"} Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.613691 4770 scope.go:117] "RemoveContainer" containerID="26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.645235 4770 scope.go:117] "RemoveContainer" containerID="1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.649046 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kc246"] Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.658143 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kc246"] Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.668820 4770 scope.go:117] "RemoveContainer" containerID="b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.714550 4770 scope.go:117] "RemoveContainer" containerID="26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39" Dec 09 15:01:37 crc kubenswrapper[4770]: E1209 15:01:37.715016 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39\": container with ID starting with 26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39 not found: ID does not exist" containerID="26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.715086 
4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39"} err="failed to get container status \"26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39\": rpc error: code = NotFound desc = could not find container \"26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39\": container with ID starting with 26afc29864f92ef0310425f3dcfc6a7c882544470cf1513bbb34508ec2652b39 not found: ID does not exist" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.715123 4770 scope.go:117] "RemoveContainer" containerID="1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac" Dec 09 15:01:37 crc kubenswrapper[4770]: E1209 15:01:37.715515 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac\": container with ID starting with 1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac not found: ID does not exist" containerID="1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.715555 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac"} err="failed to get container status \"1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac\": rpc error: code = NotFound desc = could not find container \"1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac\": container with ID starting with 1ec35af7a611c2f8db8ea10fd4d05d905b9f3eb0fd8ac97db8b2dd1a85d2c8ac not found: ID does not exist" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.715576 4770 scope.go:117] "RemoveContainer" containerID="b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e" Dec 09 15:01:37 crc kubenswrapper[4770]: E1209 15:01:37.716065 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e\": container with ID starting with b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e not found: ID does not exist" containerID="b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e" Dec 09 15:01:37 crc kubenswrapper[4770]: I1209 15:01:37.716093 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e"} err="failed to get container status \"b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e\": rpc error: code = NotFound desc = could not find container \"b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e\": container with ID starting with b0242574a7564d5de9ce559003572e592289b864abbdb2bc988f74178c6cf22e not found: ID does not exist" Dec 09 15:01:38 crc kubenswrapper[4770]: I1209 15:01:38.594686 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:01:38 crc kubenswrapper[4770]: E1209 15:01:38.595299 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:01:38 crc kubenswrapper[4770]: I1209 15:01:38.600893 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" path="/var/lib/kubelet/pods/00be71da-d168-4d14-9319-5000d9be3cb5/volumes" Dec 09 15:01:43 crc kubenswrapper[4770]: E1209 15:01:43.591149 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:01:44 crc kubenswrapper[4770]: E1209 15:01:44.590566 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:01:51 crc kubenswrapper[4770]: I1209 15:01:51.588494 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:01:51 crc kubenswrapper[4770]: E1209 15:01:51.589353 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:01:56 crc kubenswrapper[4770]: E1209 15:01:56.591424 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:01:58 crc kubenswrapper[4770]: E1209 15:01:58.597465 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:02:06 crc kubenswrapper[4770]: I1209 15:02:06.589490 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:02:06 crc kubenswrapper[4770]: E1209 15:02:06.590505 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:02:09 crc kubenswrapper[4770]: E1209 15:02:09.590986 4770 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:02:10 crc kubenswrapper[4770]: I1209 15:02:10.590954 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:02:10 crc kubenswrapper[4770]: E1209 15:02:10.724269 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:02:10 crc kubenswrapper[4770]: E1209 15:02:10.724325 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:02:10 crc kubenswrapper[4770]: E1209 15:02:10.724490 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:02:10 crc kubenswrapper[4770]: E1209 15:02:10.725779 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:02:17 crc kubenswrapper[4770]: I1209 15:02:17.588963 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:02:17 crc kubenswrapper[4770]: E1209 15:02:17.589933 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:02:22 crc kubenswrapper[4770]: E1209 15:02:22.725606 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:02:22 crc kubenswrapper[4770]: E1209 15:02:22.726233 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:02:22 crc kubenswrapper[4770]: E1209 15:02:22.726402 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:02:22 crc kubenswrapper[4770]: E1209 15:02:22.728515 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:02:25 crc kubenswrapper[4770]: E1209 15:02:25.590340 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.009549 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sdqcx"] Dec 09 15:02:29 crc kubenswrapper[4770]: E1209 15:02:29.010359 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" containerName="extract-utilities" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.010584 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" containerName="extract-utilities" Dec 09 15:02:29 crc kubenswrapper[4770]: E1209 15:02:29.010611 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" containerName="registry-server" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.010620 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" containerName="registry-server" Dec 09 15:02:29 crc kubenswrapper[4770]: E1209 15:02:29.010633 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" containerName="extract-content" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.010640 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" containerName="extract-content" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.010936 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="00be71da-d168-4d14-9319-5000d9be3cb5" containerName="registry-server" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.012959 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.022264 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdqcx"] Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.106917 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-utilities\") pod \"redhat-marketplace-sdqcx\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.107054 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-catalog-content\") pod \"redhat-marketplace-sdqcx\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.107510 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsx5\" (UniqueName: \"kubernetes.io/projected/1a29ef01-783f-4fae-8188-7f9479be0e92-kube-api-access-tvsx5\") pod \"redhat-marketplace-sdqcx\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.209684 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-utilities\") pod \"redhat-marketplace-sdqcx\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.209849 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-catalog-content\") pod \"redhat-marketplace-sdqcx\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.210040 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvsx5\" (UniqueName: \"kubernetes.io/projected/1a29ef01-783f-4fae-8188-7f9479be0e92-kube-api-access-tvsx5\") pod \"redhat-marketplace-sdqcx\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.210414 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-utilities\") pod \"redhat-marketplace-sdqcx\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.210489 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-catalog-content\") pod \"redhat-marketplace-sdqcx\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.234200 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tvsx5\" (UniqueName: \"kubernetes.io/projected/1a29ef01-783f-4fae-8188-7f9479be0e92-kube-api-access-tvsx5\") pod \"redhat-marketplace-sdqcx\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.339207 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:29 crc kubenswrapper[4770]: I1209 15:02:29.883861 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdqcx"] Dec 09 15:02:30 crc kubenswrapper[4770]: I1209 15:02:30.157031 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdqcx" event={"ID":"1a29ef01-783f-4fae-8188-7f9479be0e92","Type":"ContainerStarted","Data":"7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed"} Dec 09 15:02:30 crc kubenswrapper[4770]: I1209 15:02:30.157071 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdqcx" event={"ID":"1a29ef01-783f-4fae-8188-7f9479be0e92","Type":"ContainerStarted","Data":"d1c49a591bca6ea7b9545b0ce3ba979b2757d70b656912d87d6ab8e5e37469ab"} Dec 09 15:02:31 crc kubenswrapper[4770]: I1209 15:02:31.168833 4770 generic.go:334] "Generic (PLEG): container finished" podID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerID="7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed" exitCode=0 Dec 09 15:02:31 crc kubenswrapper[4770]: I1209 15:02:31.169038 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdqcx" event={"ID":"1a29ef01-783f-4fae-8188-7f9479be0e92","Type":"ContainerDied","Data":"7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed"} Dec 09 15:02:31 crc kubenswrapper[4770]: E1209 15:02:31.685844 4770 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = get image fs info unable to get usage for /var/lib/containers/storage/overlay-images: get disk usage for path /var/lib/containers/storage/overlay-images: lstat /var/lib/containers/storage/overlay-images/d6675231a89a81df8d2adc197f53733f98f986c5ed1a2f71b45f0acc973facef/.tmp-signatures2033869821: no such file or directory" Dec 09 15:02:31 crc kubenswrapper[4770]: E1209 15:02:31.685917 4770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: missing image stats: nil" Dec 09 15:02:32 crc kubenswrapper[4770]: I1209 15:02:32.184497 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdqcx" event={"ID":"1a29ef01-783f-4fae-8188-7f9479be0e92","Type":"ContainerStarted","Data":"54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295"} Dec 09 15:02:32 crc kubenswrapper[4770]: I1209 15:02:32.588337 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:02:32 crc kubenswrapper[4770]: E1209 15:02:32.588979 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 
15:02:33 crc kubenswrapper[4770]: I1209 15:02:33.195857 4770 generic.go:334] "Generic (PLEG): container finished" podID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerID="54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295" exitCode=0 Dec 09 15:02:33 crc kubenswrapper[4770]: I1209 15:02:33.195907 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdqcx" event={"ID":"1a29ef01-783f-4fae-8188-7f9479be0e92","Type":"ContainerDied","Data":"54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295"} Dec 09 15:02:34 crc kubenswrapper[4770]: I1209 15:02:34.207556 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdqcx" event={"ID":"1a29ef01-783f-4fae-8188-7f9479be0e92","Type":"ContainerStarted","Data":"aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4"} Dec 09 15:02:34 crc kubenswrapper[4770]: I1209 15:02:34.236075 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sdqcx" podStartSLOduration=2.558085808 podStartE2EDuration="6.236053819s" podCreationTimestamp="2025-12-09 15:02:28 +0000 UTC" firstStartedPulling="2025-12-09 15:02:30.158986442 +0000 UTC m=+2382.055188578" lastFinishedPulling="2025-12-09 15:02:33.836954453 +0000 UTC m=+2385.733156589" observedRunningTime="2025-12-09 15:02:34.235096143 +0000 UTC m=+2386.131298279" watchObservedRunningTime="2025-12-09 15:02:34.236053819 +0000 UTC m=+2386.132255955" Dec 09 15:02:36 crc kubenswrapper[4770]: E1209 15:02:36.592929 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:02:38 crc kubenswrapper[4770]: E1209 15:02:38.598490 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:02:39 crc kubenswrapper[4770]: I1209 15:02:39.340914 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:39 crc kubenswrapper[4770]: I1209 15:02:39.340975 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:39 crc kubenswrapper[4770]: I1209 15:02:39.392719 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:40 crc kubenswrapper[4770]: I1209 15:02:40.308999 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:40 crc kubenswrapper[4770]: I1209 15:02:40.355345 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdqcx"] Dec 09 15:02:42 crc kubenswrapper[4770]: I1209 15:02:42.277922 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sdqcx" podUID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerName="registry-server" 
containerID="cri-o://aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4" gracePeriod=2 Dec 09 15:02:43 crc kubenswrapper[4770]: I1209 15:02:43.928697 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:43 crc kubenswrapper[4770]: I1209 15:02:43.944835 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvsx5\" (UniqueName: \"kubernetes.io/projected/1a29ef01-783f-4fae-8188-7f9479be0e92-kube-api-access-tvsx5\") pod \"1a29ef01-783f-4fae-8188-7f9479be0e92\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " Dec 09 15:02:43 crc kubenswrapper[4770]: I1209 15:02:43.945041 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-utilities\") pod \"1a29ef01-783f-4fae-8188-7f9479be0e92\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " Dec 09 15:02:43 crc kubenswrapper[4770]: I1209 15:02:43.945306 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-catalog-content\") pod \"1a29ef01-783f-4fae-8188-7f9479be0e92\" (UID: \"1a29ef01-783f-4fae-8188-7f9479be0e92\") " Dec 09 15:02:43 crc kubenswrapper[4770]: I1209 15:02:43.946080 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-utilities" (OuterVolumeSpecName: "utilities") pod "1a29ef01-783f-4fae-8188-7f9479be0e92" (UID: "1a29ef01-783f-4fae-8188-7f9479be0e92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:02:43 crc kubenswrapper[4770]: I1209 15:02:43.953542 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a29ef01-783f-4fae-8188-7f9479be0e92-kube-api-access-tvsx5" (OuterVolumeSpecName: "kube-api-access-tvsx5") pod "1a29ef01-783f-4fae-8188-7f9479be0e92" (UID: "1a29ef01-783f-4fae-8188-7f9479be0e92"). InnerVolumeSpecName "kube-api-access-tvsx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:02:43 crc kubenswrapper[4770]: I1209 15:02:43.980069 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a29ef01-783f-4fae-8188-7f9479be0e92" (UID: "1a29ef01-783f-4fae-8188-7f9479be0e92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.047814 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.047850 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvsx5\" (UniqueName: \"kubernetes.io/projected/1a29ef01-783f-4fae-8188-7f9479be0e92-kube-api-access-tvsx5\") on node \"crc\" DevicePath \"\"" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.047864 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29ef01-783f-4fae-8188-7f9479be0e92-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.307680 4770 generic.go:334] "Generic (PLEG): container finished" podID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerID="aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4" exitCode=0 Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.307783 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdqcx" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.307775 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdqcx" event={"ID":"1a29ef01-783f-4fae-8188-7f9479be0e92","Type":"ContainerDied","Data":"aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4"} Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.308225 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdqcx" event={"ID":"1a29ef01-783f-4fae-8188-7f9479be0e92","Type":"ContainerDied","Data":"d1c49a591bca6ea7b9545b0ce3ba979b2757d70b656912d87d6ab8e5e37469ab"} Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.308250 4770 scope.go:117] "RemoveContainer" containerID="aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.351772 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdqcx"] Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.355199 4770 scope.go:117] "RemoveContainer" containerID="54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.365373 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdqcx"] Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.378296 4770 scope.go:117] "RemoveContainer" containerID="7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.434764 4770 scope.go:117] "RemoveContainer" containerID="aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4" Dec 09 15:02:44 crc kubenswrapper[4770]: E1209 15:02:44.435297 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4\": container with ID starting with aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4 not found: ID does not exist" containerID="aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.435346 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4"} err="failed to get container status \"aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4\": rpc error: code = NotFound desc = could not find container \"aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4\": container with ID starting with aa09e906c8e64a897ecbe94943cb5cdb8e5a27ee567a5c8fbd800c2dbd874be4 not found: ID does not exist" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.435379 4770 scope.go:117] "RemoveContainer" containerID="54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295" Dec 09 15:02:44 crc kubenswrapper[4770]: E1209 15:02:44.435721 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295\": container with ID starting with 54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295 not found: ID does not exist" containerID="54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.435802 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295"} err="failed to get container status \"54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295\": rpc error: code = NotFound desc = could not find container \"54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295\": container with ID starting with 54e93c8c59d5e60d62ca9cb8703e19253f9ebccf06226626bc32abd44067f295 not found: ID does not exist" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.435844 4770 scope.go:117] "RemoveContainer" containerID="7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed" Dec 09 15:02:44 crc kubenswrapper[4770]: E1209 15:02:44.436265 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed\": container with ID starting with 7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed not found: ID does not exist" containerID="7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.436298 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed"} err="failed to get container status \"7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed\": rpc error: code = NotFound desc = could not find container \"7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed\": container with ID starting with 7dac076c1fbfe5fcf0ea9de7fc8b84c4fa4d81b5f1ccd89768592a7279f7f2ed not found: ID does not exist" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.589026 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:02:44 crc kubenswrapper[4770]: E1209 15:02:44.589325 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:02:44 crc kubenswrapper[4770]: I1209 15:02:44.600916 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a29ef01-783f-4fae-8188-7f9479be0e92" path="/var/lib/kubelet/pods/1a29ef01-783f-4fae-8188-7f9479be0e92/volumes" Dec 09 15:02:48 crc kubenswrapper[4770]: E1209 15:02:48.606025 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:02:52 crc kubenswrapper[4770]: E1209 15:02:52.590887 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:02:57 crc kubenswrapper[4770]: I1209 15:02:57.588555 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:02:57 crc kubenswrapper[4770]: E1209 15:02:57.589328 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:03:00 crc kubenswrapper[4770]: E1209 15:03:00.592687 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:03:03 crc kubenswrapper[4770]: I1209 15:03:03.498186 4770 generic.go:334] "Generic (PLEG): container finished" podID="145ea2d4-9119-435e-aac0-ac0ee9eb29bf" containerID="8a6e95b468316cc8173d8805eac4aaa91fc6ff18ff661692eeefd2e026c57bcb" exitCode=2 Dec 09 15:03:03 crc kubenswrapper[4770]: I1209 15:03:03.498341 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" event={"ID":"145ea2d4-9119-435e-aac0-ac0ee9eb29bf","Type":"ContainerDied","Data":"8a6e95b468316cc8173d8805eac4aaa91fc6ff18ff661692eeefd2e026c57bcb"} Dec 09 15:03:03 crc kubenswrapper[4770]: E1209 15:03:03.590992 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.065704 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.268881 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-ssh-key\") pod \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.269256 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5hl2\" (UniqueName: \"kubernetes.io/projected/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-kube-api-access-d5hl2\") pod \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.269395 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-inventory\") pod \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\" (UID: \"145ea2d4-9119-435e-aac0-ac0ee9eb29bf\") " Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.275244 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-kube-api-access-d5hl2" (OuterVolumeSpecName: "kube-api-access-d5hl2") pod "145ea2d4-9119-435e-aac0-ac0ee9eb29bf" (UID: "145ea2d4-9119-435e-aac0-ac0ee9eb29bf"). InnerVolumeSpecName "kube-api-access-d5hl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.309653 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-inventory" (OuterVolumeSpecName: "inventory") pod "145ea2d4-9119-435e-aac0-ac0ee9eb29bf" (UID: "145ea2d4-9119-435e-aac0-ac0ee9eb29bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.324607 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "145ea2d4-9119-435e-aac0-ac0ee9eb29bf" (UID: "145ea2d4-9119-435e-aac0-ac0ee9eb29bf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.373394 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5hl2\" (UniqueName: \"kubernetes.io/projected/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-kube-api-access-d5hl2\") on node \"crc\" DevicePath \"\"" Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.373441 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.373454 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/145ea2d4-9119-435e-aac0-ac0ee9eb29bf-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.524639 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" event={"ID":"145ea2d4-9119-435e-aac0-ac0ee9eb29bf","Type":"ContainerDied","Data":"d4b134a770195fcd4f044f84fdf6a1620359d87ab64355403d3ca1e7679879d7"} Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.524684 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b134a770195fcd4f044f84fdf6a1620359d87ab64355403d3ca1e7679879d7" Dec 09 15:03:05 crc kubenswrapper[4770]: I1209 15:03:05.524708 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh" Dec 09 15:03:08 crc kubenswrapper[4770]: I1209 15:03:08.601772 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:03:08 crc kubenswrapper[4770]: E1209 15:03:08.602968 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.031914 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n"] Dec 09 15:03:13 crc kubenswrapper[4770]: E1209 15:03:13.034629 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerName="registry-server" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.034745 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerName="registry-server" Dec 09 15:03:13 crc kubenswrapper[4770]: E1209 15:03:13.034832 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145ea2d4-9119-435e-aac0-ac0ee9eb29bf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.034891 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="145ea2d4-9119-435e-aac0-ac0ee9eb29bf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:03:13 crc kubenswrapper[4770]: E1209 15:03:13.034971 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerName="extract-utilities" Dec 
09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.035028 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerName="extract-utilities" Dec 09 15:03:13 crc kubenswrapper[4770]: E1209 15:03:13.035103 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerName="extract-content" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.035232 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerName="extract-content" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.035517 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="145ea2d4-9119-435e-aac0-ac0ee9eb29bf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.035597 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a29ef01-783f-4fae-8188-7f9479be0e92" containerName="registry-server" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.036582 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.039177 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.040178 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.040236 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.047678 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n"] Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.050110 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.167643 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvq5\" (UniqueName: \"kubernetes.io/projected/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-kube-api-access-hgvq5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5x54n\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.167707 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5x54n\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.168385 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5x54n\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc 
kubenswrapper[4770]: I1209 15:03:13.270478 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5x54n\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.270550 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgvq5\" (UniqueName: \"kubernetes.io/projected/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-kube-api-access-hgvq5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5x54n\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.270582 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5x54n\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.278426 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5x54n\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.287191 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5x54n\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.288808 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgvq5\" (UniqueName: \"kubernetes.io/projected/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-kube-api-access-hgvq5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5x54n\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:13 crc kubenswrapper[4770]: I1209 15:03:13.369086 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" Dec 09 15:03:14 crc kubenswrapper[4770]: I1209 15:03:14.031442 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n"] Dec 09 15:03:14 crc kubenswrapper[4770]: E1209 15:03:14.594125 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:03:14 crc kubenswrapper[4770]: I1209 15:03:14.670876 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" event={"ID":"d2fd4634-93f2-4bc7-8f5b-7cf34397626c","Type":"ContainerStarted","Data":"f34aff05e607078d69be5d4b7eaac1a8e980d9dfd7a7fbd0e0b86c621ce34847"} Dec 09 15:03:15 crc kubenswrapper[4770]: E1209 15:03:15.590912 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:03:15 crc kubenswrapper[4770]: I1209 15:03:15.683368 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" event={"ID":"d2fd4634-93f2-4bc7-8f5b-7cf34397626c","Type":"ContainerStarted","Data":"5e387f41a2fc156ce2431cda2b94a2d36912a0a025029d33932829f790ac48c3"} Dec 09 15:03:15 crc kubenswrapper[4770]: I1209 15:03:15.714027 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" podStartSLOduration=2.209265504 podStartE2EDuration="2.714010227s" podCreationTimestamp="2025-12-09 15:03:13 +0000 UTC" firstStartedPulling="2025-12-09 15:03:14.036240776 +0000 UTC m=+2425.932442912" lastFinishedPulling="2025-12-09 15:03:14.540985499 +0000 UTC m=+2426.437187635" observedRunningTime="2025-12-09 15:03:15.712693901 +0000 UTC m=+2427.608896047" watchObservedRunningTime="2025-12-09 15:03:15.714010227 +0000 UTC m=+2427.610212363" Dec 09 15:03:21 crc kubenswrapper[4770]: I1209 15:03:21.588153 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:03:21 crc kubenswrapper[4770]: E1209 15:03:21.590468 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:03:26 crc kubenswrapper[4770]: E1209 15:03:26.590974 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:03:26 crc 
kubenswrapper[4770]: E1209 15:03:26.592036 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:03:35 crc kubenswrapper[4770]: I1209 15:03:35.589063 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:03:35 crc kubenswrapper[4770]: E1209 15:03:35.589793 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:03:40 crc kubenswrapper[4770]: E1209 15:03:40.592125 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:03:40 crc kubenswrapper[4770]: E1209 15:03:40.594184 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:03:45 crc kubenswrapper[4770]: I1209 15:03:45.722460 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" Dec 09 15:03:45 crc kubenswrapper[4770]: E1209 15:03:45.724960 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:03:53 crc kubenswrapper[4770]: E1209 15:03:53.591966 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:03:54 crc kubenswrapper[4770]: E1209 15:03:54.591931 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:03:56 crc kubenswrapper[4770]: I1209 15:03:56.588863 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929" 
Dec 09 15:03:56 crc kubenswrapper[4770]: E1209 15:03:56.589856 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:04:04 crc kubenswrapper[4770]: E1209 15:04:04.592262 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:04:06 crc kubenswrapper[4770]: E1209 15:04:06.590949 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:04:11 crc kubenswrapper[4770]: I1209 15:04:11.589328 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:04:11 crc kubenswrapper[4770]: E1209 15:04:11.590029 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:04:16 crc kubenswrapper[4770]: E1209 15:04:16.590693 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:04:17 crc kubenswrapper[4770]: E1209 15:04:17.593395 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:04:24 crc kubenswrapper[4770]: I1209 15:04:24.588637 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:04:24 crc kubenswrapper[4770]: E1209 15:04:24.589394 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:04:28 crc kubenswrapper[4770]: E1209 15:04:28.598500 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:04:31 crc kubenswrapper[4770]: E1209 15:04:31.590222 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:04:35 crc kubenswrapper[4770]: I1209 15:04:35.589285 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:04:35 crc kubenswrapper[4770]: E1209 15:04:35.590309 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:04:41 crc kubenswrapper[4770]: E1209 15:04:41.591139 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:04:43 crc kubenswrapper[4770]: E1209 15:04:43.593941 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:04:48 crc kubenswrapper[4770]: I1209 15:04:48.600469 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:04:48 crc kubenswrapper[4770]: E1209 15:04:48.601252 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:04:56 crc kubenswrapper[4770]: E1209 15:04:56.592687 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:04:56 crc kubenswrapper[4770]: E1209 15:04:56.592720 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:05:03 crc kubenswrapper[4770]: I1209 15:05:03.589002 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:05:03 crc kubenswrapper[4770]: E1209 15:05:03.590457 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:05:08 crc kubenswrapper[4770]: E1209 15:05:08.599745 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:05:10 crc kubenswrapper[4770]: E1209 15:05:10.590523 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:05:17 crc kubenswrapper[4770]: I1209 15:05:17.589196 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:05:17 crc kubenswrapper[4770]: E1209 15:05:17.590238 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:05:21 crc kubenswrapper[4770]: E1209 15:05:21.590796 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:05:22 crc kubenswrapper[4770]: E1209 15:05:22.590026 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:05:28 crc kubenswrapper[4770]: I1209 15:05:28.596960 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:05:28 crc kubenswrapper[4770]: E1209 15:05:28.598979 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:05:33 crc kubenswrapper[4770]: E1209 15:05:33.590626 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:05:33 crc kubenswrapper[4770]: E1209 15:05:33.590702 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:05:40 crc kubenswrapper[4770]: I1209 15:05:40.588546 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:05:40 crc kubenswrapper[4770]: E1209 15:05:40.589511 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:05:48 crc kubenswrapper[4770]: E1209 15:05:48.598622 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:05:48 crc kubenswrapper[4770]: E1209 15:05:48.598641 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:05:55 crc kubenswrapper[4770]: I1209 15:05:55.589068 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:05:55 crc kubenswrapper[4770]: E1209 15:05:55.592226 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:05:59 crc kubenswrapper[4770]: E1209 15:05:59.590924 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:05:59 crc kubenswrapper[4770]: E1209 15:05:59.590963 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:06:09 crc kubenswrapper[4770]: I1209 15:06:09.589187 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:06:09 crc kubenswrapper[4770]: E1209 15:06:09.590162 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:06:11 crc kubenswrapper[4770]: E1209 15:06:11.591096 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:06:12 crc kubenswrapper[4770]: E1209 15:06:12.590953 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:06:23 crc kubenswrapper[4770]: I1209 15:06:23.589161 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:06:23 crc kubenswrapper[4770]: E1209 15:06:23.591386 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:06:24 crc kubenswrapper[4770]: I1209 15:06:24.601279 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"61d9d30a1148247b9892340a4079ad31de07ff04e2553c2aa4bf985ae5fb277b"}
Dec 09 15:06:26 crc kubenswrapper[4770]: E1209 15:06:26.597384 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:06:35 crc kubenswrapper[4770]: E1209 15:06:35.591591 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:06:37 crc kubenswrapper[4770]: E1209 15:06:37.590302 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:06:46 crc kubenswrapper[4770]: E1209 15:06:46.592105 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:06:48 crc kubenswrapper[4770]: E1209 15:06:48.640309 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:06:59 crc kubenswrapper[4770]: E1209 15:06:59.591752 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:07:00 crc kubenswrapper[4770]: E1209 15:07:00.655802 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:07:11 crc kubenswrapper[4770]: I1209 15:07:11.591095 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 15:07:11 crc kubenswrapper[4770]: E1209 15:07:11.694499 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 15:07:11 crc kubenswrapper[4770]: E1209 15:07:11.694561 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 15:07:11 crc kubenswrapper[4770]: E1209 15:07:11.694743 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 15:07:11 crc kubenswrapper[4770]: E1209 15:07:11.695907 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:07:15 crc kubenswrapper[4770]: E1209 15:07:15.591115 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:07:22 crc kubenswrapper[4770]: E1209 15:07:22.591352 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:07:29 crc kubenswrapper[4770]: E1209 15:07:29.716877 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 15:07:29 crc kubenswrapper[4770]: E1209 15:07:29.717352 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 15:07:29 crc kubenswrapper[4770]: E1209 15:07:29.717476 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 15:07:29 crc kubenswrapper[4770]: E1209 15:07:29.719166 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:07:36 crc kubenswrapper[4770]: E1209 15:07:36.592239 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:07:43 crc kubenswrapper[4770]: E1209 15:07:43.590817 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:07:50 crc kubenswrapper[4770]: E1209 15:07:50.589961 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:07:57 crc kubenswrapper[4770]: E1209 15:07:57.591706 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:08:04 crc kubenswrapper[4770]: E1209 15:08:04.591199 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:08:11 crc kubenswrapper[4770]: E1209 15:08:11.590815 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:08:18 crc kubenswrapper[4770]: E1209 15:08:18.599382 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:08:26 crc kubenswrapper[4770]: E1209 15:08:26.591939 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:08:32 crc kubenswrapper[4770]: E1209 15:08:32.591973 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:08:40 crc kubenswrapper[4770]: E1209 15:08:40.591353 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:08:44 crc kubenswrapper[4770]: I1209 15:08:44.243656 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:08:44 crc kubenswrapper[4770]: I1209 15:08:44.244262 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:08:47 crc kubenswrapper[4770]: E1209 15:08:47.591071 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:08:47 crc kubenswrapper[4770]: I1209 15:08:47.862969 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gj2sm"]
Dec 09 15:08:47 crc kubenswrapper[4770]: I1209 15:08:47.866022 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:47 crc kubenswrapper[4770]: I1209 15:08:47.883280 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gj2sm"]
Dec 09 15:08:47 crc kubenswrapper[4770]: I1209 15:08:47.902402 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-utilities\") pod \"redhat-operators-gj2sm\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") " pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:47 crc kubenswrapper[4770]: I1209 15:08:47.902483 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-catalog-content\") pod \"redhat-operators-gj2sm\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") " pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:47 crc kubenswrapper[4770]: I1209 15:08:47.902513 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stlv7\" (UniqueName: \"kubernetes.io/projected/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-kube-api-access-stlv7\") pod \"redhat-operators-gj2sm\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") " pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:48 crc kubenswrapper[4770]: I1209 15:08:48.004538 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-utilities\") pod \"redhat-operators-gj2sm\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") " pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:48 crc kubenswrapper[4770]: I1209 15:08:48.004638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-catalog-content\") pod \"redhat-operators-gj2sm\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") " pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:48 crc kubenswrapper[4770]: I1209 15:08:48.004668 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stlv7\" (UniqueName: \"kubernetes.io/projected/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-kube-api-access-stlv7\") pod \"redhat-operators-gj2sm\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") " pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:48 crc kubenswrapper[4770]: I1209 15:08:48.005091 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-utilities\") pod \"redhat-operators-gj2sm\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") " pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:48 crc kubenswrapper[4770]: I1209 15:08:48.005337 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-catalog-content\") pod \"redhat-operators-gj2sm\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") " pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:48 crc kubenswrapper[4770]: I1209 15:08:48.029437 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stlv7\" (UniqueName: \"kubernetes.io/projected/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-kube-api-access-stlv7\") pod \"redhat-operators-gj2sm\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") " pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:48 crc kubenswrapper[4770]: I1209 15:08:48.187587 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:48 crc kubenswrapper[4770]: I1209 15:08:48.697264 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gj2sm"]
Dec 09 15:08:49 crc kubenswrapper[4770]: I1209 15:08:49.007454 4770 generic.go:334] "Generic (PLEG): container finished" podID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerID="de04db88947917f386bce5d2e716309668203c0dbcc352d67bbeba81dd648411" exitCode=0
Dec 09 15:08:49 crc kubenswrapper[4770]: I1209 15:08:49.007517 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gj2sm" event={"ID":"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9","Type":"ContainerDied","Data":"de04db88947917f386bce5d2e716309668203c0dbcc352d67bbeba81dd648411"}
Dec 09 15:08:49 crc kubenswrapper[4770]: I1209 15:08:49.007833 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gj2sm" event={"ID":"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9","Type":"ContainerStarted","Data":"62b11df04e7dd2451ebd4378b0af9eeae054873efe938a8be150a7dbb8810cf2"}
Dec 09 15:08:50 crc kubenswrapper[4770]: I1209 15:08:50.023115 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gj2sm" event={"ID":"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9","Type":"ContainerStarted","Data":"db6309fbaded4c13f37562373ab87e09c28efe199274d930a5205eedc59e58ff"}
Dec 09 15:08:54 crc kubenswrapper[4770]: I1209 15:08:54.069182 4770 generic.go:334] "Generic (PLEG): container finished" podID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerID="db6309fbaded4c13f37562373ab87e09c28efe199274d930a5205eedc59e58ff" exitCode=0
Dec 09 15:08:54 crc kubenswrapper[4770]: I1209 15:08:54.069641 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gj2sm" event={"ID":"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9","Type":"ContainerDied","Data":"db6309fbaded4c13f37562373ab87e09c28efe199274d930a5205eedc59e58ff"}
Dec 09 15:08:55 crc kubenswrapper[4770]: I1209 15:08:55.081039 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gj2sm" event={"ID":"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9","Type":"ContainerStarted","Data":"e78af74fafe93c1664ae98ef95f56314d69f3babaf019208076992aec59d2e3a"}
Dec 09 15:08:55 crc kubenswrapper[4770]: I1209 15:08:55.106933 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gj2sm" podStartSLOduration=2.445889603 podStartE2EDuration="8.106896891s" podCreationTimestamp="2025-12-09 15:08:47 +0000 UTC" firstStartedPulling="2025-12-09 15:08:49.009233748 +0000 UTC m=+2760.905435894" lastFinishedPulling="2025-12-09 15:08:54.670241046 +0000 UTC m=+2766.566443182" observedRunningTime="2025-12-09 15:08:55.09701677 +0000 UTC m=+2766.993218916" watchObservedRunningTime="2025-12-09 15:08:55.106896891 +0000 UTC m=+2767.003099027"
Dec 09 15:08:55 crc kubenswrapper[4770]: E1209 15:08:55.590756 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:08:58 crc kubenswrapper[4770]: I1209 15:08:58.188768 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:58 crc kubenswrapper[4770]: I1209 15:08:58.189094 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:08:59 crc kubenswrapper[4770]: I1209 15:08:59.241619 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gj2sm" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerName="registry-server" probeResult="failure" output=<
Dec 09 15:08:59 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s
Dec 09 15:08:59 crc kubenswrapper[4770]: >
Dec 09 15:09:00 crc kubenswrapper[4770]: E1209 15:09:00.591014 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:09:08 crc kubenswrapper[4770]: I1209 15:09:08.238612 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:09:08 crc kubenswrapper[4770]: I1209 15:09:08.299587 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:09:08 crc kubenswrapper[4770]: I1209 15:09:08.477306 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gj2sm"]
Dec 09 15:09:10 crc kubenswrapper[4770]: I1209 15:09:10.223090 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gj2sm" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerName="registry-server" containerID="cri-o://e78af74fafe93c1664ae98ef95f56314d69f3babaf019208076992aec59d2e3a" gracePeriod=2
Dec 09 15:09:10 crc kubenswrapper[4770]: E1209 15:09:10.589449 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:09:11 crc kubenswrapper[4770]: I1209 15:09:11.240238 4770 generic.go:334] "Generic (PLEG): container finished" podID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerID="e78af74fafe93c1664ae98ef95f56314d69f3babaf019208076992aec59d2e3a" exitCode=0
Dec 09 15:09:11 crc kubenswrapper[4770]: I1209 15:09:11.240297 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gj2sm" event={"ID":"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9","Type":"ContainerDied","Data":"e78af74fafe93c1664ae98ef95f56314d69f3babaf019208076992aec59d2e3a"}
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.252109 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gj2sm" event={"ID":"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9","Type":"ContainerDied","Data":"62b11df04e7dd2451ebd4378b0af9eeae054873efe938a8be150a7dbb8810cf2"}
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.252483 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62b11df04e7dd2451ebd4378b0af9eeae054873efe938a8be150a7dbb8810cf2"
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.286005 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.417554 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-utilities\") pod \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") "
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.417619 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stlv7\" (UniqueName: \"kubernetes.io/projected/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-kube-api-access-stlv7\") pod \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") "
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.417713 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-catalog-content\") pod \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\" (UID: \"982c286c-9d13-4ff1-bc59-6cdfdc2eaee9\") "
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.418532 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-utilities" (OuterVolumeSpecName: "utilities") pod "982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" (UID: "982c286c-9d13-4ff1-bc59-6cdfdc2eaee9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.424998 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-kube-api-access-stlv7" (OuterVolumeSpecName: "kube-api-access-stlv7") pod "982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" (UID: "982c286c-9d13-4ff1-bc59-6cdfdc2eaee9"). InnerVolumeSpecName "kube-api-access-stlv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.520021 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.520058 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stlv7\" (UniqueName: \"kubernetes.io/projected/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-kube-api-access-stlv7\") on node \"crc\" DevicePath \"\""
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.528195 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" (UID: "982c286c-9d13-4ff1-bc59-6cdfdc2eaee9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 15:09:12 crc kubenswrapper[4770]: E1209 15:09:12.591921 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:09:12 crc kubenswrapper[4770]: I1209 15:09:12.621932 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 15:09:13 crc kubenswrapper[4770]: I1209 15:09:13.259410 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gj2sm"
Dec 09 15:09:13 crc kubenswrapper[4770]: I1209 15:09:13.285126 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gj2sm"]
Dec 09 15:09:13 crc kubenswrapper[4770]: I1209 15:09:13.299110 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gj2sm"]
Dec 09 15:09:14 crc kubenswrapper[4770]: I1209 15:09:14.245016 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:09:14 crc kubenswrapper[4770]: I1209 15:09:14.245309 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:09:14 crc kubenswrapper[4770]: I1209 15:09:14.602220 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" path="/var/lib/kubelet/pods/982c286c-9d13-4ff1-bc59-6cdfdc2eaee9/volumes"
Dec 09 15:09:22 crc kubenswrapper[4770]: E1209 15:09:22.590594 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:09:27 crc kubenswrapper[4770]: E1209 15:09:27.590424 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:09:31 crc kubenswrapper[4770]: I1209 15:09:31.442706 4770 generic.go:334] "Generic (PLEG): container finished" podID="d2fd4634-93f2-4bc7-8f5b-7cf34397626c" containerID="5e387f41a2fc156ce2431cda2b94a2d36912a0a025029d33932829f790ac48c3" exitCode=2
Dec 09 15:09:31 crc kubenswrapper[4770]: I1209 15:09:31.442762 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" event={"ID":"d2fd4634-93f2-4bc7-8f5b-7cf34397626c","Type":"ContainerDied","Data":"5e387f41a2fc156ce2431cda2b94a2d36912a0a025029d33932829f790ac48c3"}
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.014474 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n"
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.151154 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-ssh-key\") pod \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") "
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.151334 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgvq5\" (UniqueName: \"kubernetes.io/projected/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-kube-api-access-hgvq5\") pod \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") "
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.151529 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-inventory\") pod \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\" (UID: \"d2fd4634-93f2-4bc7-8f5b-7cf34397626c\") "
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.156988 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-kube-api-access-hgvq5" (OuterVolumeSpecName: "kube-api-access-hgvq5") pod "d2fd4634-93f2-4bc7-8f5b-7cf34397626c" (UID: "d2fd4634-93f2-4bc7-8f5b-7cf34397626c"). InnerVolumeSpecName "kube-api-access-hgvq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.178825 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-inventory" (OuterVolumeSpecName: "inventory") pod "d2fd4634-93f2-4bc7-8f5b-7cf34397626c" (UID: "d2fd4634-93f2-4bc7-8f5b-7cf34397626c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.188784 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d2fd4634-93f2-4bc7-8f5b-7cf34397626c" (UID: "d2fd4634-93f2-4bc7-8f5b-7cf34397626c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.254123 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.254664 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgvq5\" (UniqueName: \"kubernetes.io/projected/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-kube-api-access-hgvq5\") on node \"crc\" DevicePath \"\""
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.254762 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2fd4634-93f2-4bc7-8f5b-7cf34397626c-inventory\") on node \"crc\" DevicePath \"\""
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.490271 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n" event={"ID":"d2fd4634-93f2-4bc7-8f5b-7cf34397626c","Type":"ContainerDied","Data":"f34aff05e607078d69be5d4b7eaac1a8e980d9dfd7a7fbd0e0b86c621ce34847"}
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.490319 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f34aff05e607078d69be5d4b7eaac1a8e980d9dfd7a7fbd0e0b86c621ce34847"
Dec 09 15:09:33 crc kubenswrapper[4770]: I1209 15:09:33.490406 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5x54n"
Dec 09 15:09:33 crc kubenswrapper[4770]: E1209 15:09:33.635561 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:09:33 crc kubenswrapper[4770]: E1209 15:09:33.888829 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2fd4634_93f2_4bc7_8f5b_7cf34397626c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2fd4634_93f2_4bc7_8f5b_7cf34397626c.slice/crio-f34aff05e607078d69be5d4b7eaac1a8e980d9dfd7a7fbd0e0b86c621ce34847\": RecentStats: unable to find data in memory cache]"
Dec 09 15:09:40 crc kubenswrapper[4770]: E1209 15:09:40.590375 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:09:44 crc kubenswrapper[4770]: I1209 15:09:44.243451 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:09:44 crc kubenswrapper[4770]: I1209 15:09:44.244500 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:09:44 crc kubenswrapper[4770]: I1209 15:09:44.244576 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj"
Dec 09 15:09:44 crc kubenswrapper[4770]: I1209 15:09:44.245954 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61d9d30a1148247b9892340a4079ad31de07ff04e2553c2aa4bf985ae5fb277b"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 15:09:44 crc kubenswrapper[4770]: I1209 15:09:44.246050 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://61d9d30a1148247b9892340a4079ad31de07ff04e2553c2aa4bf985ae5fb277b" gracePeriod=600
Dec 09 15:09:44 crc kubenswrapper[4770]: I1209 15:09:44.596751 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="61d9d30a1148247b9892340a4079ad31de07ff04e2553c2aa4bf985ae5fb277b" exitCode=0
Dec 09 15:09:44 crc kubenswrapper[4770]: I1209 15:09:44.600529 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"61d9d30a1148247b9892340a4079ad31de07ff04e2553c2aa4bf985ae5fb277b"}
Dec 09 15:09:44 crc kubenswrapper[4770]: I1209 15:09:44.600640 4770 scope.go:117] "RemoveContainer" containerID="6fbc07683b524e566381bfe3cb1ae0274cd1923db02fd5f3bf3a058cc93d3929"
Dec 09 15:09:45 crc kubenswrapper[4770]: I1209 15:09:45.612188 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739"}
Dec 09 15:09:46 crc kubenswrapper[4770]: E1209 15:09:46.591376 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.041524 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"]
Dec 09 15:09:50 crc kubenswrapper[4770]: E1209 15:09:50.042607 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fd4634-93f2-4bc7-8f5b-7cf34397626c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.042634 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fd4634-93f2-4bc7-8f5b-7cf34397626c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 15:09:50 crc kubenswrapper[4770]: E1209 15:09:50.042694 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerName="registry-server"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.042701 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerName="registry-server"
Dec 09 15:09:50 crc kubenswrapper[4770]: E1209 15:09:50.042710 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerName="extract-utilities"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.042745 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerName="extract-utilities"
Dec 09 15:09:50 crc kubenswrapper[4770]: E1209 15:09:50.042763 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerName="extract-content"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.042769 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerName="extract-content"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.043002 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2fd4634-93f2-4bc7-8f5b-7cf34397626c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.043029 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="982c286c-9d13-4ff1-bc59-6cdfdc2eaee9" containerName="registry-server"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.043891 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.046628 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.047171 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.047327 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.047479 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.056234 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"]
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.117355 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.117873 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.117961 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7dzw\" (UniqueName: \"kubernetes.io/projected/aa029105-c93b-48e9-8331-76d58f4794d8-kube-api-access-j7dzw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.219842 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.219908 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7dzw\" (UniqueName: \"kubernetes.io/projected/aa029105-c93b-48e9-8331-76d58f4794d8-kube-api-access-j7dzw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.219948 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.227960 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.232601 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.237774 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7dzw\" (UniqueName: \"kubernetes.io/projected/aa029105-c93b-48e9-8331-76d58f4794d8-kube-api-access-j7dzw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"
Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.412531 4770 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz" Dec 09 15:09:50 crc kubenswrapper[4770]: I1209 15:09:50.973723 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz"] Dec 09 15:09:50 crc kubenswrapper[4770]: W1209 15:09:50.975943 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa029105_c93b_48e9_8331_76d58f4794d8.slice/crio-6896f6d96fb696f3b9191a2b3a1d89e753e7dbd991aa141ed655a65ab70eafe6 WatchSource:0}: Error finding container 6896f6d96fb696f3b9191a2b3a1d89e753e7dbd991aa141ed655a65ab70eafe6: Status 404 returned error can't find the container with id 6896f6d96fb696f3b9191a2b3a1d89e753e7dbd991aa141ed655a65ab70eafe6 Dec 09 15:09:51 crc kubenswrapper[4770]: I1209 15:09:51.678236 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz" event={"ID":"aa029105-c93b-48e9-8331-76d58f4794d8","Type":"ContainerStarted","Data":"18c69d244e10837b22c9025005ed61cee7359f855b57c1e682086878de428cc8"} Dec 09 15:09:51 crc kubenswrapper[4770]: I1209 15:09:51.678817 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz" event={"ID":"aa029105-c93b-48e9-8331-76d58f4794d8","Type":"ContainerStarted","Data":"6896f6d96fb696f3b9191a2b3a1d89e753e7dbd991aa141ed655a65ab70eafe6"} Dec 09 15:09:51 crc kubenswrapper[4770]: I1209 15:09:51.712789 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz" podStartSLOduration=1.317459311 podStartE2EDuration="1.712717257s" podCreationTimestamp="2025-12-09 15:09:50 +0000 UTC" firstStartedPulling="2025-12-09 15:09:50.97871622 +0000 UTC m=+2822.874918356" lastFinishedPulling="2025-12-09 15:09:51.373974166 +0000 UTC m=+2823.270176302" observedRunningTime="2025-12-09 15:09:51.703077281 +0000 UTC m=+2823.599279427" watchObservedRunningTime="2025-12-09 15:09:51.712717257 +0000 UTC m=+2823.608919393" Dec 09 15:09:52 crc kubenswrapper[4770]: E1209 15:09:52.591622 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:09:59 crc kubenswrapper[4770]: E1209 15:09:59.592034 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:10:03 crc kubenswrapper[4770]: E1209 15:10:03.590371 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:10:12 crc kubenswrapper[4770]: E1209 15:10:12.591623 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:10:16 crc kubenswrapper[4770]: E1209 15:10:16.592313 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:10:27 crc kubenswrapper[4770]: E1209 15:10:27.591241 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:10:29 crc kubenswrapper[4770]: E1209 15:10:29.590293 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:10:39 crc kubenswrapper[4770]: E1209 15:10:39.591911 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:10:42 crc kubenswrapper[4770]: E1209 15:10:42.591323 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:10:51 crc kubenswrapper[4770]: E1209 15:10:51.591216 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:10:57 crc kubenswrapper[4770]: E1209 15:10:57.590606 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:11:05 crc kubenswrapper[4770]: E1209 15:11:05.592270 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:11:08 
crc kubenswrapper[4770]: E1209 15:11:08.604721 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:11:16 crc kubenswrapper[4770]: E1209 15:11:16.592548 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:11:23 crc kubenswrapper[4770]: E1209 15:11:23.591029 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:11:28 crc kubenswrapper[4770]: E1209 15:11:28.599901 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:11:35 crc kubenswrapper[4770]: E1209 15:11:35.591327 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:11:42 crc kubenswrapper[4770]: E1209 15:11:42.592071 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:11:44 crc kubenswrapper[4770]: I1209 15:11:44.244045 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:11:44 crc kubenswrapper[4770]: I1209 15:11:44.244118 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:11:50 crc kubenswrapper[4770]: E1209 15:11:50.591753 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:11:56 crc kubenswrapper[4770]: E1209 15:11:56.592625 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:12:01 crc kubenswrapper[4770]: E1209 15:12:01.590607 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:12:08 crc kubenswrapper[4770]: E1209 15:12:08.606712 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:12:12 crc kubenswrapper[4770]: I1209 15:12:12.591338 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:12:12 crc kubenswrapper[4770]: E1209 15:12:12.713089 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:12:12 crc kubenswrapper[4770]: E1209 15:12:12.713171 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:12:12 crc kubenswrapper[4770]: E1209 15:12:12.713380 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:12:12 crc kubenswrapper[4770]: E1209 15:12:12.714759 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:12:14 crc kubenswrapper[4770]: I1209 15:12:14.243420 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:12:14 crc kubenswrapper[4770]: I1209 15:12:14.243770 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:12:22 crc kubenswrapper[4770]: E1209 15:12:22.592347 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:12:23 crc kubenswrapper[4770]: E1209 15:12:23.590911 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:12:35 crc kubenswrapper[4770]: E1209 15:12:35.717128 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:12:35 crc kubenswrapper[4770]: E1209 15:12:35.717783 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:12:35 crc kubenswrapper[4770]: E1209 15:12:35.717948 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:12:35 crc kubenswrapper[4770]: E1209 15:12:35.719189 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:12:38 crc kubenswrapper[4770]: E1209 15:12:38.625602 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:12:44 crc kubenswrapper[4770]: I1209 15:12:44.244107 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:12:44 crc kubenswrapper[4770]: I1209 15:12:44.244933 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:12:44 crc kubenswrapper[4770]: I1209 15:12:44.245275 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 15:12:44 crc kubenswrapper[4770]: I1209 15:12:44.246238 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:12:44 crc kubenswrapper[4770]: I1209 15:12:44.246307 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" gracePeriod=600 Dec 09 15:12:44 crc kubenswrapper[4770]: E1209 15:12:44.374086 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:12:45 crc kubenswrapper[4770]: I1209 15:12:45.197226 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" exitCode=0 Dec 09 15:12:45 crc kubenswrapper[4770]: I1209 15:12:45.197271 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739"} Dec 09 15:12:45 crc kubenswrapper[4770]: I1209 15:12:45.197308 4770 scope.go:117] "RemoveContainer" 
containerID="61d9d30a1148247b9892340a4079ad31de07ff04e2553c2aa4bf985ae5fb277b" Dec 09 15:12:45 crc kubenswrapper[4770]: I1209 15:12:45.198014 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:12:45 crc kubenswrapper[4770]: E1209 15:12:45.198279 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:12:50 crc kubenswrapper[4770]: E1209 15:12:50.590391 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:12:53 crc kubenswrapper[4770]: E1209 15:12:53.590765 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:12:57 crc kubenswrapper[4770]: I1209 15:12:57.589432 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:12:57 crc kubenswrapper[4770]: E1209 15:12:57.589981 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:13:03 crc kubenswrapper[4770]: E1209 15:13:03.591346 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:13:05 crc kubenswrapper[4770]: E1209 15:13:05.589232 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:13:12 crc kubenswrapper[4770]: I1209 15:13:12.590813 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:13:12 crc kubenswrapper[4770]: E1209 15:13:12.591673 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:13:14 crc kubenswrapper[4770]: E1209 15:13:14.592533 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:13:16 crc kubenswrapper[4770]: E1209 15:13:16.590804 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:13:25 crc kubenswrapper[4770]: E1209 15:13:25.591652 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:13:26 crc kubenswrapper[4770]: I1209 15:13:26.588038 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:13:26 crc kubenswrapper[4770]: E1209 15:13:26.588532 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:13:27 crc kubenswrapper[4770]: E1209 15:13:27.590300 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:13:39 crc kubenswrapper[4770]: E1209 15:13:39.591692 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:13:40 crc kubenswrapper[4770]: I1209 15:13:40.587999 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:13:40 crc kubenswrapper[4770]: E1209 15:13:40.588310 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:13:40 crc kubenswrapper[4770]: E1209 15:13:40.590085 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:13:51 crc kubenswrapper[4770]: I1209 15:13:51.589177 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:13:51 crc kubenswrapper[4770]: E1209 15:13:51.590521 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:13:51 crc kubenswrapper[4770]: E1209 15:13:51.591935 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:13:53 crc kubenswrapper[4770]: E1209 15:13:53.592339 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:14:02 crc kubenswrapper[4770]: I1209 15:14:02.589584 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:14:02 crc kubenswrapper[4770]: E1209 15:14:02.590433 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:14:03 crc kubenswrapper[4770]: E1209 15:14:03.591617 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:14:08 crc kubenswrapper[4770]: E1209 15:14:08.608797 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:14:16 crc kubenswrapper[4770]: I1209 
15:14:16.589986 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:14:16 crc kubenswrapper[4770]: E1209 15:14:16.591009 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:14:18 crc kubenswrapper[4770]: E1209 15:14:18.596175 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:14:22 crc kubenswrapper[4770]: E1209 15:14:22.592182 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:14:28 crc kubenswrapper[4770]: I1209 15:14:28.595798 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:14:28 crc kubenswrapper[4770]: E1209 15:14:28.596576 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:14:31 crc kubenswrapper[4770]: E1209 15:14:31.591031 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:14:37 crc kubenswrapper[4770]: E1209 15:14:37.590920 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:14:40 crc kubenswrapper[4770]: I1209 15:14:40.588744 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:14:40 crc kubenswrapper[4770]: E1209 15:14:40.589314 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:14:43 crc kubenswrapper[4770]: E1209 15:14:43.591992 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:14:50 crc kubenswrapper[4770]: E1209 15:14:50.590972 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:14:53 crc kubenswrapper[4770]: I1209 15:14:53.589785 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:14:53 crc kubenswrapper[4770]: E1209 15:14:53.591353 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:14:57 crc kubenswrapper[4770]: E1209 15:14:57.590156 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.167111 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg"] Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.169524 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.189556 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.189829 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.195451 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg"] Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.229511 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68e26c41-83c2-4332-be8e-b6581e866db1-config-volume\") pod \"collect-profiles-29421555-sdxmg\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.229605 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlwrp\" (UniqueName: \"kubernetes.io/projected/68e26c41-83c2-4332-be8e-b6581e866db1-kube-api-access-nlwrp\") pod \"collect-profiles-29421555-sdxmg\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.229636 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68e26c41-83c2-4332-be8e-b6581e866db1-secret-volume\") pod \"collect-profiles-29421555-sdxmg\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.331544 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68e26c41-83c2-4332-be8e-b6581e866db1-config-volume\") pod \"collect-profiles-29421555-sdxmg\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.331627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlwrp\" (UniqueName: \"kubernetes.io/projected/68e26c41-83c2-4332-be8e-b6581e866db1-kube-api-access-nlwrp\") pod \"collect-profiles-29421555-sdxmg\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.331671 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68e26c41-83c2-4332-be8e-b6581e866db1-secret-volume\") pod \"collect-profiles-29421555-sdxmg\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.332896 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68e26c41-83c2-4332-be8e-b6581e866db1-config-volume\") pod 
\"collect-profiles-29421555-sdxmg\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.338664 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68e26c41-83c2-4332-be8e-b6581e866db1-secret-volume\") pod \"collect-profiles-29421555-sdxmg\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.352273 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlwrp\" (UniqueName: \"kubernetes.io/projected/68e26c41-83c2-4332-be8e-b6581e866db1-kube-api-access-nlwrp\") pod \"collect-profiles-29421555-sdxmg\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:00 crc kubenswrapper[4770]: I1209 15:15:00.521284 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:01 crc kubenswrapper[4770]: I1209 15:15:01.000613 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg"] Dec 09 15:15:01 crc kubenswrapper[4770]: W1209 15:15:01.007827 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68e26c41_83c2_4332_be8e_b6581e866db1.slice/crio-04dce201744f3a0396a00550a3ef37840f13def0704c60c6bfd0eda8a2cac7fd WatchSource:0}: Error finding container 04dce201744f3a0396a00550a3ef37840f13def0704c60c6bfd0eda8a2cac7fd: Status 404 returned error can't find the container with id 04dce201744f3a0396a00550a3ef37840f13def0704c60c6bfd0eda8a2cac7fd Dec 09 15:15:01 crc kubenswrapper[4770]: I1209 15:15:01.743506 4770 generic.go:334] "Generic (PLEG): container finished" podID="68e26c41-83c2-4332-be8e-b6581e866db1" containerID="8f3af003fb91a041055cdea7720fd76629efbec55c22a29054b67c9075dda9ec" exitCode=0 Dec 09 15:15:01 crc kubenswrapper[4770]: I1209 15:15:01.743579 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" event={"ID":"68e26c41-83c2-4332-be8e-b6581e866db1","Type":"ContainerDied","Data":"8f3af003fb91a041055cdea7720fd76629efbec55c22a29054b67c9075dda9ec"} Dec 09 15:15:01 crc kubenswrapper[4770]: I1209 15:15:01.743854 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" event={"ID":"68e26c41-83c2-4332-be8e-b6581e866db1","Type":"ContainerStarted","Data":"04dce201744f3a0396a00550a3ef37840f13def0704c60c6bfd0eda8a2cac7fd"} Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.284012 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.318641 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68e26c41-83c2-4332-be8e-b6581e866db1-config-volume\") pod \"68e26c41-83c2-4332-be8e-b6581e866db1\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.318880 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlwrp\" (UniqueName: \"kubernetes.io/projected/68e26c41-83c2-4332-be8e-b6581e866db1-kube-api-access-nlwrp\") pod \"68e26c41-83c2-4332-be8e-b6581e866db1\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.318984 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68e26c41-83c2-4332-be8e-b6581e866db1-secret-volume\") pod \"68e26c41-83c2-4332-be8e-b6581e866db1\" (UID: \"68e26c41-83c2-4332-be8e-b6581e866db1\") " Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.319422 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e26c41-83c2-4332-be8e-b6581e866db1-config-volume" (OuterVolumeSpecName: "config-volume") pod "68e26c41-83c2-4332-be8e-b6581e866db1" (UID: "68e26c41-83c2-4332-be8e-b6581e866db1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.325053 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e26c41-83c2-4332-be8e-b6581e866db1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "68e26c41-83c2-4332-be8e-b6581e866db1" (UID: "68e26c41-83c2-4332-be8e-b6581e866db1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.325546 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e26c41-83c2-4332-be8e-b6581e866db1-kube-api-access-nlwrp" (OuterVolumeSpecName: "kube-api-access-nlwrp") pod "68e26c41-83c2-4332-be8e-b6581e866db1" (UID: "68e26c41-83c2-4332-be8e-b6581e866db1"). InnerVolumeSpecName "kube-api-access-nlwrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.421374 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlwrp\" (UniqueName: \"kubernetes.io/projected/68e26c41-83c2-4332-be8e-b6581e866db1-kube-api-access-nlwrp\") on node \"crc\" DevicePath \"\"" Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.421682 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68e26c41-83c2-4332-be8e-b6581e866db1-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.421693 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68e26c41-83c2-4332-be8e-b6581e866db1-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:15:03 crc kubenswrapper[4770]: E1209 15:15:03.591390 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.768447 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" event={"ID":"68e26c41-83c2-4332-be8e-b6581e866db1","Type":"ContainerDied","Data":"04dce201744f3a0396a00550a3ef37840f13def0704c60c6bfd0eda8a2cac7fd"} Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.768521 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04dce201744f3a0396a00550a3ef37840f13def0704c60c6bfd0eda8a2cac7fd" Dec 09 15:15:03 crc kubenswrapper[4770]: I1209 15:15:03.768621 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg" Dec 09 15:15:04 crc kubenswrapper[4770]: I1209 15:15:04.359029 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf"] Dec 09 15:15:04 crc kubenswrapper[4770]: I1209 15:15:04.371069 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-j7vnf"] Dec 09 15:15:04 crc kubenswrapper[4770]: I1209 15:15:04.600813 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34ea339-53b7-4e3d-981b-120b80ad0385" path="/var/lib/kubelet/pods/d34ea339-53b7-4e3d-981b-120b80ad0385/volumes" Dec 09 15:15:08 crc kubenswrapper[4770]: I1209 15:15:08.598748 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:15:08 crc kubenswrapper[4770]: E1209 15:15:08.599461 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:15:10 crc kubenswrapper[4770]: E1209 15:15:10.593050 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:15:14 crc kubenswrapper[4770]: E1209 15:15:14.591136 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:15:22 crc kubenswrapper[4770]: I1209 15:15:22.588461 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:15:22 crc kubenswrapper[4770]: E1209 15:15:22.590601 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:15:23 crc kubenswrapper[4770]: I1209 15:15:23.158505 4770 scope.go:117] "RemoveContainer" containerID="de04db88947917f386bce5d2e716309668203c0dbcc352d67bbeba81dd648411" Dec 09 15:15:23 crc kubenswrapper[4770]: I1209 15:15:23.189277 4770 scope.go:117] "RemoveContainer" containerID="e78af74fafe93c1664ae98ef95f56314d69f3babaf019208076992aec59d2e3a" Dec 09 15:15:23 crc kubenswrapper[4770]: I1209 15:15:23.260133 4770 scope.go:117] "RemoveContainer" containerID="cca6704df730fc89b79f886353e1fb74048a40b644f16d273c7103f2e102d636" Dec 09 15:15:23 crc kubenswrapper[4770]: I1209 15:15:23.299335 4770 scope.go:117] "RemoveContainer" 
containerID="db6309fbaded4c13f37562373ab87e09c28efe199274d930a5205eedc59e58ff" Dec 09 15:15:25 crc kubenswrapper[4770]: E1209 15:15:25.590145 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:15:26 crc kubenswrapper[4770]: E1209 15:15:26.591231 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:15:35 crc kubenswrapper[4770]: I1209 15:15:35.588632 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:15:35 crc kubenswrapper[4770]: E1209 15:15:35.589314 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:15:39 crc kubenswrapper[4770]: E1209 15:15:39.590506 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:15:39 crc kubenswrapper[4770]: E1209 15:15:39.591177 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:15:46 crc kubenswrapper[4770]: I1209 15:15:46.589082 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:15:46 crc kubenswrapper[4770]: E1209 15:15:46.589836 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:15:50 crc kubenswrapper[4770]: E1209 15:15:50.591068 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:15:52 crc kubenswrapper[4770]: E1209 15:15:52.591490 4770 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:16:00 crc kubenswrapper[4770]: I1209 15:16:00.588155 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:16:00 crc kubenswrapper[4770]: E1209 15:16:00.588895 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:16:03 crc kubenswrapper[4770]: E1209 15:16:03.590633 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:16:05 crc kubenswrapper[4770]: E1209 15:16:05.591244 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:16:06 crc kubenswrapper[4770]: I1209 15:16:06.372428 4770 generic.go:334] "Generic (PLEG): container finished" podID="aa029105-c93b-48e9-8331-76d58f4794d8" containerID="18c69d244e10837b22c9025005ed61cee7359f855b57c1e682086878de428cc8" exitCode=2 Dec 09 15:16:06 crc kubenswrapper[4770]: I1209 15:16:06.372502 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz" event={"ID":"aa029105-c93b-48e9-8331-76d58f4794d8","Type":"ContainerDied","Data":"18c69d244e10837b22c9025005ed61cee7359f855b57c1e682086878de428cc8"} Dec 09 15:16:07 crc kubenswrapper[4770]: I1209 15:16:07.906458 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz" Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.041787 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7dzw\" (UniqueName: \"kubernetes.io/projected/aa029105-c93b-48e9-8331-76d58f4794d8-kube-api-access-j7dzw\") pod \"aa029105-c93b-48e9-8331-76d58f4794d8\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.042096 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-ssh-key\") pod \"aa029105-c93b-48e9-8331-76d58f4794d8\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.042201 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-inventory\") pod \"aa029105-c93b-48e9-8331-76d58f4794d8\" (UID: \"aa029105-c93b-48e9-8331-76d58f4794d8\") " Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.049161 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa029105-c93b-48e9-8331-76d58f4794d8-kube-api-access-j7dzw" (OuterVolumeSpecName: "kube-api-access-j7dzw") pod "aa029105-c93b-48e9-8331-76d58f4794d8" (UID: "aa029105-c93b-48e9-8331-76d58f4794d8"). InnerVolumeSpecName "kube-api-access-j7dzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.079687 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aa029105-c93b-48e9-8331-76d58f4794d8" (UID: "aa029105-c93b-48e9-8331-76d58f4794d8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.080312 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-inventory" (OuterVolumeSpecName: "inventory") pod "aa029105-c93b-48e9-8331-76d58f4794d8" (UID: "aa029105-c93b-48e9-8331-76d58f4794d8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.145362 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.145416 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa029105-c93b-48e9-8331-76d58f4794d8-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.145441 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7dzw\" (UniqueName: \"kubernetes.io/projected/aa029105-c93b-48e9-8331-76d58f4794d8-kube-api-access-j7dzw\") on node \"crc\" DevicePath \"\"" Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.395300 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz" event={"ID":"aa029105-c93b-48e9-8331-76d58f4794d8","Type":"ContainerDied","Data":"6896f6d96fb696f3b9191a2b3a1d89e753e7dbd991aa141ed655a65ab70eafe6"} Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.395343 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6896f6d96fb696f3b9191a2b3a1d89e753e7dbd991aa141ed655a65ab70eafe6" Dec 09 15:16:08 crc kubenswrapper[4770]: I1209 15:16:08.395345 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz" Dec 09 15:16:15 crc kubenswrapper[4770]: I1209 15:16:15.588662 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:16:15 crc kubenswrapper[4770]: E1209 15:16:15.589668 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:16:16 crc kubenswrapper[4770]: E1209 15:16:16.591954 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:16:18 crc kubenswrapper[4770]: E1209 15:16:18.604938 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:16:27 crc kubenswrapper[4770]: I1209 15:16:27.587859 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:16:27 crc kubenswrapper[4770]: E1209 15:16:27.588692 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:16:29 crc kubenswrapper[4770]: E1209 15:16:29.592055 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:16:31 crc kubenswrapper[4770]: E1209 15:16:31.592272 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:16:42 crc kubenswrapper[4770]: I1209 15:16:42.588384 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:16:42 crc kubenswrapper[4770]: E1209 15:16:42.589247 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:16:43 crc kubenswrapper[4770]: E1209 15:16:43.591434 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:16:44 crc kubenswrapper[4770]: E1209 15:16:44.591773 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.047826 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr"] Dec 09 15:16:45 crc kubenswrapper[4770]: E1209 15:16:45.049637 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa029105-c93b-48e9-8331-76d58f4794d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.049702 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa029105-c93b-48e9-8331-76d58f4794d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:16:45 crc kubenswrapper[4770]: E1209 15:16:45.049759 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e26c41-83c2-4332-be8e-b6581e866db1" containerName="collect-profiles" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.049773 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="68e26c41-83c2-4332-be8e-b6581e866db1" containerName="collect-profiles" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.050411 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e26c41-83c2-4332-be8e-b6581e866db1" containerName="collect-profiles" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.050451 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa029105-c93b-48e9-8331-76d58f4794d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.051853 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.054448 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.054643 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.054649 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.056743 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.062881 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr"] Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.222586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pd2f\" (UniqueName: \"kubernetes.io/projected/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-kube-api-access-6pd2f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.223154 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.223224 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.325450 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.325817 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6pd2f\" (UniqueName: \"kubernetes.io/projected/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-kube-api-access-6pd2f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.326020 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.330833 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.331926 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.344552 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pd2f\" (UniqueName: \"kubernetes.io/projected/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-kube-api-access-6pd2f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.385467 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" Dec 09 15:16:45 crc kubenswrapper[4770]: I1209 15:16:45.968536 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr"] Dec 09 15:16:46 crc kubenswrapper[4770]: I1209 15:16:46.874397 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" event={"ID":"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d","Type":"ContainerStarted","Data":"a4d559bf48c1969bed694f7ffb856046aa7cfa53c00dc84ccbaa0274f9fff8aa"} Dec 09 15:16:46 crc kubenswrapper[4770]: I1209 15:16:46.874871 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" event={"ID":"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d","Type":"ContainerStarted","Data":"71518faed5d5be4c76ff33118b2472ab5805f6a46434d8c5a6368555e828416c"} Dec 09 15:16:46 crc kubenswrapper[4770]: I1209 15:16:46.903128 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" podStartSLOduration=1.488456689 podStartE2EDuration="1.903087769s" podCreationTimestamp="2025-12-09 15:16:45 +0000 UTC" firstStartedPulling="2025-12-09 15:16:45.970910847 +0000 UTC m=+3237.867112983" lastFinishedPulling="2025-12-09 15:16:46.385541927 +0000 UTC m=+3238.281744063" observedRunningTime="2025-12-09 15:16:46.888087617 +0000 UTC m=+3238.784289843" watchObservedRunningTime="2025-12-09 15:16:46.903087769 +0000 UTC m=+3238.799289895" Dec 09 15:16:53 crc kubenswrapper[4770]: I1209 15:16:53.589570 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:16:53 crc kubenswrapper[4770]: E1209 15:16:53.590359 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:16:54 crc kubenswrapper[4770]: E1209 15:16:54.592528 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:16:56 crc kubenswrapper[4770]: E1209 15:16:56.591448 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:17:07 crc kubenswrapper[4770]: I1209 15:17:07.589059 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739" Dec 09 15:17:07 crc kubenswrapper[4770]: E1209 15:17:07.589906 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Dec 09 15:17:07 crc kubenswrapper[4770]: E1209 15:17:07.589906 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:17:07 crc kubenswrapper[4770]: E1209 15:17:07.591546 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:17:07 crc kubenswrapper[4770]: E1209 15:17:07.592274 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:17:18 crc kubenswrapper[4770]: I1209 15:17:18.598772 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739"
Dec 09 15:17:18 crc kubenswrapper[4770]: E1209 15:17:18.599789 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:17:19 crc kubenswrapper[4770]: E1209 15:17:19.590409 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:17:22 crc kubenswrapper[4770]: I1209 15:17:22.591208 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 15:17:22 crc kubenswrapper[4770]: E1209 15:17:22.738871 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 15:17:22 crc kubenswrapper[4770]: E1209 15:17:22.739481 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 15:17:22 crc kubenswrapper[4770]: E1209 15:17:22.739720 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 15:17:22 crc kubenswrapper[4770]: E1209 15:17:22.740947 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:17:32 crc kubenswrapper[4770]: I1209 15:17:32.589560 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739"
Dec 09 15:17:32 crc kubenswrapper[4770]: E1209 15:17:32.590404 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:17:33 crc kubenswrapper[4770]: E1209 15:17:33.591182 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:17:35 crc kubenswrapper[4770]: E1209 15:17:35.591941 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:17:45 crc kubenswrapper[4770]: I1209 15:17:45.588471 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739"
Dec 09 15:17:46 crc kubenswrapper[4770]: I1209 15:17:46.486252 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"3abf703c48387acb3cf79d87fe9831092cfd9613c3799e87d18353fec13b4a0c"}
Dec 09 15:17:47 crc kubenswrapper[4770]: E1209 15:17:47.589925 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:17:48 crc kubenswrapper[4770]: E1209 15:17:48.724294 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:17:48 crc kubenswrapper[4770]: E1209 15:17:48.724983 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:17:48 crc kubenswrapper[4770]: E1209 15:17:48.726329 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:17:59 crc kubenswrapper[4770]: E1209 15:17:59.591094 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:18:03 crc kubenswrapper[4770]: E1209 15:18:03.590075 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:18:12 crc kubenswrapper[4770]: E1209 15:18:12.590646 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:18:18 crc kubenswrapper[4770]: E1209 15:18:18.600367 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:18:25 crc kubenswrapper[4770]: E1209 15:18:25.590640 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:18:30 crc kubenswrapper[4770]: E1209 15:18:30.590136 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:18:36 crc kubenswrapper[4770]: E1209 15:18:36.592220 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:18:43 crc kubenswrapper[4770]: E1209 15:18:43.591032 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:18:44 crc kubenswrapper[4770]: I1209 15:18:44.995467 4770 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-8czqf"] Dec 09 15:18:44 crc kubenswrapper[4770]: I1209 15:18:44.998383 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.006218 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8czqf"] Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.058097 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-utilities\") pod \"redhat-marketplace-8czqf\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.058193 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-catalog-content\") pod \"redhat-marketplace-8czqf\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.058236 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmxq2\" (UniqueName: \"kubernetes.io/projected/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-kube-api-access-nmxq2\") pod \"redhat-marketplace-8czqf\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.160649 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-catalog-content\") pod \"redhat-marketplace-8czqf\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.161058 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmxq2\" (UniqueName: \"kubernetes.io/projected/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-kube-api-access-nmxq2\") pod \"redhat-marketplace-8czqf\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.161133 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-catalog-content\") pod \"redhat-marketplace-8czqf\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.161647 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-utilities\") pod \"redhat-marketplace-8czqf\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.161947 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-utilities\") pod \"redhat-marketplace-8czqf\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") 
" pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.182890 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmxq2\" (UniqueName: \"kubernetes.io/projected/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-kube-api-access-nmxq2\") pod \"redhat-marketplace-8czqf\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:45 crc kubenswrapper[4770]: I1209 15:18:45.328223 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:46 crc kubenswrapper[4770]: I1209 15:18:46.097463 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8czqf"] Dec 09 15:18:46 crc kubenswrapper[4770]: W1209 15:18:46.104053 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f1d3ac3_19d1_4993_a28d_e4d1ea498a69.slice/crio-7fb8cb64958b21167d54771f15dadad269aa265b04a2b0b5c8e68074354cc0af WatchSource:0}: Error finding container 7fb8cb64958b21167d54771f15dadad269aa265b04a2b0b5c8e68074354cc0af: Status 404 returned error can't find the container with id 7fb8cb64958b21167d54771f15dadad269aa265b04a2b0b5c8e68074354cc0af Dec 09 15:18:46 crc kubenswrapper[4770]: I1209 15:18:46.119900 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8czqf" event={"ID":"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69","Type":"ContainerStarted","Data":"7fb8cb64958b21167d54771f15dadad269aa265b04a2b0b5c8e68074354cc0af"} Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.131335 4770 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerID="3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6" exitCode=0 Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.131457 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8czqf" event={"ID":"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69","Type":"ContainerDied","Data":"3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6"} Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.210311 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jmnmg"] Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.225374 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmnmg"] Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.225547 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.305906 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zf2x\" (UniqueName: \"kubernetes.io/projected/8787b8d7-529c-4b7f-92d3-b6bd82570332-kube-api-access-6zf2x\") pod \"community-operators-jmnmg\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.306108 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-utilities\") pod \"community-operators-jmnmg\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.306157 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-catalog-content\") pod \"community-operators-jmnmg\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.397847 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6bpr7"] Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.400788 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.408019 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bpr7"] Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.408468 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-utilities\") pod \"community-operators-jmnmg\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.408542 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-catalog-content\") pod \"community-operators-jmnmg\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.408592 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zf2x\" (UniqueName: \"kubernetes.io/projected/8787b8d7-529c-4b7f-92d3-b6bd82570332-kube-api-access-6zf2x\") pod \"community-operators-jmnmg\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.409069 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-utilities\") pod \"community-operators-jmnmg\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.409093 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-catalog-content\") pod \"community-operators-jmnmg\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.452015 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zf2x\" (UniqueName: \"kubernetes.io/projected/8787b8d7-529c-4b7f-92d3-b6bd82570332-kube-api-access-6zf2x\") pod \"community-operators-jmnmg\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.510352 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-catalog-content\") pod \"certified-operators-6bpr7\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.510434 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wvg5\" (UniqueName: \"kubernetes.io/projected/024dc397-a3eb-4c42-a05f-7fd47619ffd0-kube-api-access-7wvg5\") pod \"certified-operators-6bpr7\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.510507 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-utilities\") pod \"certified-operators-6bpr7\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.549138 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.612652 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-catalog-content\") pod \"certified-operators-6bpr7\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.613038 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wvg5\" (UniqueName: \"kubernetes.io/projected/024dc397-a3eb-4c42-a05f-7fd47619ffd0-kube-api-access-7wvg5\") pod \"certified-operators-6bpr7\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.613128 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-utilities\") pod \"certified-operators-6bpr7\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.614589 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-catalog-content\") pod \"certified-operators-6bpr7\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.614914 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-utilities\") pod \"certified-operators-6bpr7\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.632816 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wvg5\" (UniqueName: \"kubernetes.io/projected/024dc397-a3eb-4c42-a05f-7fd47619ffd0-kube-api-access-7wvg5\") pod \"certified-operators-6bpr7\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:47 crc kubenswrapper[4770]: I1209 15:18:47.725231 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:48 crc kubenswrapper[4770]: I1209 15:18:48.145541 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8czqf" event={"ID":"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69","Type":"ContainerStarted","Data":"1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00"} Dec 09 15:18:48 crc kubenswrapper[4770]: I1209 15:18:48.160296 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmnmg"] Dec 09 15:18:48 crc kubenswrapper[4770]: I1209 15:18:48.337179 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bpr7"] Dec 09 15:18:48 crc kubenswrapper[4770]: W1209 15:18:48.423539 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod024dc397_a3eb_4c42_a05f_7fd47619ffd0.slice/crio-aa555e2c883608f45705a4f69c7e9f02ec32b0da8c12a8a60c787cfc15ca3e4c WatchSource:0}: Error finding container aa555e2c883608f45705a4f69c7e9f02ec32b0da8c12a8a60c787cfc15ca3e4c: Status 404 returned error can't find the container with id aa555e2c883608f45705a4f69c7e9f02ec32b0da8c12a8a60c787cfc15ca3e4c Dec 09 15:18:49 crc kubenswrapper[4770]: I1209 15:18:49.159409 4770 generic.go:334] "Generic (PLEG): container finished" podID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerID="c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea" exitCode=0 Dec 09 15:18:49 crc kubenswrapper[4770]: I1209 15:18:49.159516 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmnmg" event={"ID":"8787b8d7-529c-4b7f-92d3-b6bd82570332","Type":"ContainerDied","Data":"c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea"} Dec 09 15:18:49 crc kubenswrapper[4770]: I1209 15:18:49.160035 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmnmg" event={"ID":"8787b8d7-529c-4b7f-92d3-b6bd82570332","Type":"ContainerStarted","Data":"3f857bdf165b80ed0427af50fa3162292cb4441f718e81e139c199c183fcd449"} Dec 09 15:18:49 crc kubenswrapper[4770]: I1209 15:18:49.164482 4770 generic.go:334] "Generic (PLEG): container finished" podID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerID="5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432" exitCode=0 Dec 09 15:18:49 crc kubenswrapper[4770]: I1209 15:18:49.164550 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpr7" event={"ID":"024dc397-a3eb-4c42-a05f-7fd47619ffd0","Type":"ContainerDied","Data":"5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432"} Dec 09 15:18:49 crc kubenswrapper[4770]: I1209 15:18:49.164586 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpr7" event={"ID":"024dc397-a3eb-4c42-a05f-7fd47619ffd0","Type":"ContainerStarted","Data":"aa555e2c883608f45705a4f69c7e9f02ec32b0da8c12a8a60c787cfc15ca3e4c"} Dec 09 15:18:49 crc kubenswrapper[4770]: I1209 15:18:49.170306 4770 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerID="1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00" exitCode=0 Dec 09 15:18:49 crc kubenswrapper[4770]: I1209 15:18:49.170353 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8czqf" 
event={"ID":"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69","Type":"ContainerDied","Data":"1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00"} Dec 09 15:18:50 crc kubenswrapper[4770]: I1209 15:18:50.186334 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpr7" event={"ID":"024dc397-a3eb-4c42-a05f-7fd47619ffd0","Type":"ContainerStarted","Data":"20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4"} Dec 09 15:18:50 crc kubenswrapper[4770]: I1209 15:18:50.189835 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8czqf" event={"ID":"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69","Type":"ContainerStarted","Data":"68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af"} Dec 09 15:18:50 crc kubenswrapper[4770]: E1209 15:18:50.590775 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:18:51 crc kubenswrapper[4770]: I1209 15:18:51.200651 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmnmg" event={"ID":"8787b8d7-529c-4b7f-92d3-b6bd82570332","Type":"ContainerStarted","Data":"8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab"} Dec 09 15:18:51 crc kubenswrapper[4770]: I1209 15:18:51.202787 4770 generic.go:334] "Generic (PLEG): container finished" podID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerID="20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4" exitCode=0 Dec 09 15:18:51 crc kubenswrapper[4770]: I1209 15:18:51.202834 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpr7" event={"ID":"024dc397-a3eb-4c42-a05f-7fd47619ffd0","Type":"ContainerDied","Data":"20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4"} Dec 09 15:18:51 crc kubenswrapper[4770]: I1209 15:18:51.222505 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8czqf" podStartSLOduration=4.789587054 podStartE2EDuration="7.222486648s" podCreationTimestamp="2025-12-09 15:18:44 +0000 UTC" firstStartedPulling="2025-12-09 15:18:47.134381867 +0000 UTC m=+3359.030584003" lastFinishedPulling="2025-12-09 15:18:49.567281451 +0000 UTC m=+3361.463483597" observedRunningTime="2025-12-09 15:18:50.239158724 +0000 UTC m=+3362.135360870" watchObservedRunningTime="2025-12-09 15:18:51.222486648 +0000 UTC m=+3363.118688784" Dec 09 15:18:52 crc kubenswrapper[4770]: I1209 15:18:52.214673 4770 generic.go:334] "Generic (PLEG): container finished" podID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerID="8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab" exitCode=0 Dec 09 15:18:52 crc kubenswrapper[4770]: I1209 15:18:52.214751 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmnmg" event={"ID":"8787b8d7-529c-4b7f-92d3-b6bd82570332","Type":"ContainerDied","Data":"8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab"} Dec 09 15:18:53 crc kubenswrapper[4770]: I1209 15:18:53.226866 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpr7" 
event={"ID":"024dc397-a3eb-4c42-a05f-7fd47619ffd0","Type":"ContainerStarted","Data":"1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e"} Dec 09 15:18:53 crc kubenswrapper[4770]: I1209 15:18:53.231038 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmnmg" event={"ID":"8787b8d7-529c-4b7f-92d3-b6bd82570332","Type":"ContainerStarted","Data":"4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727"} Dec 09 15:18:53 crc kubenswrapper[4770]: I1209 15:18:53.255302 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6bpr7" podStartSLOduration=3.162551873 podStartE2EDuration="6.25527439s" podCreationTimestamp="2025-12-09 15:18:47 +0000 UTC" firstStartedPulling="2025-12-09 15:18:49.167181059 +0000 UTC m=+3361.063383215" lastFinishedPulling="2025-12-09 15:18:52.259903596 +0000 UTC m=+3364.156105732" observedRunningTime="2025-12-09 15:18:53.247427785 +0000 UTC m=+3365.143629911" watchObservedRunningTime="2025-12-09 15:18:53.25527439 +0000 UTC m=+3365.151476536" Dec 09 15:18:53 crc kubenswrapper[4770]: I1209 15:18:53.276664 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jmnmg" podStartSLOduration=2.8348096160000003 podStartE2EDuration="6.276637876s" podCreationTimestamp="2025-12-09 15:18:47 +0000 UTC" firstStartedPulling="2025-12-09 15:18:49.161962316 +0000 UTC m=+3361.058164452" lastFinishedPulling="2025-12-09 15:18:52.603790576 +0000 UTC m=+3364.499992712" observedRunningTime="2025-12-09 15:18:53.267259148 +0000 UTC m=+3365.163461284" watchObservedRunningTime="2025-12-09 15:18:53.276637876 +0000 UTC m=+3365.172840012" Dec 09 15:18:55 crc kubenswrapper[4770]: I1209 15:18:55.328489 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:55 crc kubenswrapper[4770]: I1209 15:18:55.328771 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:55 crc kubenswrapper[4770]: I1209 15:18:55.381745 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:55 crc kubenswrapper[4770]: E1209 15:18:55.591217 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:18:56 crc kubenswrapper[4770]: I1209 15:18:56.343713 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:57 crc kubenswrapper[4770]: I1209 15:18:57.550043 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:57 crc kubenswrapper[4770]: I1209 15:18:57.550398 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:57 crc kubenswrapper[4770]: I1209 15:18:57.606082 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:57 crc kubenswrapper[4770]: I1209 15:18:57.725978 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:57 crc kubenswrapper[4770]: I1209 15:18:57.726053 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:57 crc kubenswrapper[4770]: I1209 15:18:57.776741 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:58 crc kubenswrapper[4770]: I1209 15:18:58.190461 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8czqf"] Dec 09 15:18:58 crc kubenswrapper[4770]: I1209 15:18:58.320681 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8czqf" podUID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerName="registry-server" containerID="cri-o://68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af" gracePeriod=2 Dec 09 15:18:58 crc kubenswrapper[4770]: I1209 15:18:58.413860 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:18:58 crc kubenswrapper[4770]: I1209 15:18:58.462489 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.129103 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.232986 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-utilities\") pod \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.233069 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmxq2\" (UniqueName: \"kubernetes.io/projected/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-kube-api-access-nmxq2\") pod \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.233156 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-catalog-content\") pod \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\" (UID: \"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69\") " Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.233550 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-utilities" (OuterVolumeSpecName: "utilities") pod "5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" (UID: "5f1d3ac3-19d1-4993-a28d-e4d1ea498a69"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.234270 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.239604 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-kube-api-access-nmxq2" (OuterVolumeSpecName: "kube-api-access-nmxq2") pod "5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" (UID: "5f1d3ac3-19d1-4993-a28d-e4d1ea498a69"). InnerVolumeSpecName "kube-api-access-nmxq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.259267 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" (UID: "5f1d3ac3-19d1-4993-a28d-e4d1ea498a69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.336621 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmxq2\" (UniqueName: \"kubernetes.io/projected/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-kube-api-access-nmxq2\") on node \"crc\" DevicePath \"\"" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.337119 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.339402 4770 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerID="68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af" exitCode=0 Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.339539 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8czqf" event={"ID":"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69","Type":"ContainerDied","Data":"68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af"} Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.339620 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8czqf" event={"ID":"5f1d3ac3-19d1-4993-a28d-e4d1ea498a69","Type":"ContainerDied","Data":"7fb8cb64958b21167d54771f15dadad269aa265b04a2b0b5c8e68074354cc0af"} Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.339866 4770 scope.go:117] "RemoveContainer" containerID="68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.340147 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8czqf" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.390561 4770 scope.go:117] "RemoveContainer" containerID="1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.405313 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8czqf"] Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.414928 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8czqf"] Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.415974 4770 scope.go:117] "RemoveContainer" containerID="3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.469186 4770 scope.go:117] "RemoveContainer" containerID="68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af" Dec 09 15:18:59 crc kubenswrapper[4770]: E1209 15:18:59.469596 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af\": container with ID starting with 68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af not found: ID does not exist" containerID="68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.469635 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af"} err="failed to get container status \"68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af\": rpc error: code = NotFound desc = could not find container \"68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af\": container with ID starting with 68a049f8b50476216389199d103f09a2a788fc9eb987361988921f6ad1b424af not found: ID does not exist" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.469666 4770 scope.go:117] "RemoveContainer" containerID="1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00" Dec 09 15:18:59 crc kubenswrapper[4770]: E1209 15:18:59.469912 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00\": container with ID starting with 1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00 not found: ID does not exist" containerID="1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.469931 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00"} err="failed to get container status \"1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00\": rpc error: code = NotFound desc = could not find container \"1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00\": container with ID starting with 1749dfdbce86de624f7f55f998a695d4581f74d4ce9a92ef3ce0fdda22c83c00 not found: ID does not exist" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.469947 4770 scope.go:117] "RemoveContainer" containerID="3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6" Dec 09 15:18:59 crc kubenswrapper[4770]: E1209 15:18:59.471213 4770 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6\": container with ID starting with 3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6 not found: ID does not exist" containerID="3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6" Dec 09 15:18:59 crc kubenswrapper[4770]: I1209 15:18:59.471242 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6"} err="failed to get container status \"3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6\": rpc error: code = NotFound desc = could not find container \"3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6\": container with ID starting with 3c0ce1982f73708254ba6b03bc4d3fd2eb90f87b0e8820802ba24002a5d561b6 not found: ID does not exist" Dec 09 15:19:00 crc kubenswrapper[4770]: I1209 15:19:00.000569 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmnmg"] Dec 09 15:19:00 crc kubenswrapper[4770]: I1209 15:19:00.350062 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jmnmg" podUID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerName="registry-server" containerID="cri-o://4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727" gracePeriod=2 Dec 09 15:19:00 crc kubenswrapper[4770]: I1209 15:19:00.617762 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" path="/var/lib/kubelet/pods/5f1d3ac3-19d1-4993-a28d-e4d1ea498a69/volumes" Dec 09 15:19:00 crc kubenswrapper[4770]: I1209 15:19:00.619990 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bpr7"] Dec 09 15:19:00 crc kubenswrapper[4770]: I1209 15:19:00.620355 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6bpr7" podUID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerName="registry-server" containerID="cri-o://1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e" gracePeriod=2 Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.007581 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.091693 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zf2x\" (UniqueName: \"kubernetes.io/projected/8787b8d7-529c-4b7f-92d3-b6bd82570332-kube-api-access-6zf2x\") pod \"8787b8d7-529c-4b7f-92d3-b6bd82570332\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.091786 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-utilities\") pod \"8787b8d7-529c-4b7f-92d3-b6bd82570332\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.091867 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-catalog-content\") pod \"8787b8d7-529c-4b7f-92d3-b6bd82570332\" (UID: \"8787b8d7-529c-4b7f-92d3-b6bd82570332\") " Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.093180 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-utilities" (OuterVolumeSpecName: "utilities") pod "8787b8d7-529c-4b7f-92d3-b6bd82570332" (UID: "8787b8d7-529c-4b7f-92d3-b6bd82570332"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.097677 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8787b8d7-529c-4b7f-92d3-b6bd82570332-kube-api-access-6zf2x" (OuterVolumeSpecName: "kube-api-access-6zf2x") pod "8787b8d7-529c-4b7f-92d3-b6bd82570332" (UID: "8787b8d7-529c-4b7f-92d3-b6bd82570332"). InnerVolumeSpecName "kube-api-access-6zf2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.155690 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8787b8d7-529c-4b7f-92d3-b6bd82570332" (UID: "8787b8d7-529c-4b7f-92d3-b6bd82570332"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.158580 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.202651 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zf2x\" (UniqueName: \"kubernetes.io/projected/8787b8d7-529c-4b7f-92d3-b6bd82570332-kube-api-access-6zf2x\") on node \"crc\" DevicePath \"\"" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.202684 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.202694 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8787b8d7-529c-4b7f-92d3-b6bd82570332-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.304755 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-utilities\") pod \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.304930 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wvg5\" (UniqueName: \"kubernetes.io/projected/024dc397-a3eb-4c42-a05f-7fd47619ffd0-kube-api-access-7wvg5\") pod \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.305138 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-catalog-content\") pod \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\" (UID: \"024dc397-a3eb-4c42-a05f-7fd47619ffd0\") " Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.305355 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-utilities" (OuterVolumeSpecName: "utilities") pod "024dc397-a3eb-4c42-a05f-7fd47619ffd0" (UID: "024dc397-a3eb-4c42-a05f-7fd47619ffd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.305828 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.310446 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/024dc397-a3eb-4c42-a05f-7fd47619ffd0-kube-api-access-7wvg5" (OuterVolumeSpecName: "kube-api-access-7wvg5") pod "024dc397-a3eb-4c42-a05f-7fd47619ffd0" (UID: "024dc397-a3eb-4c42-a05f-7fd47619ffd0"). InnerVolumeSpecName "kube-api-access-7wvg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.360051 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "024dc397-a3eb-4c42-a05f-7fd47619ffd0" (UID: "024dc397-a3eb-4c42-a05f-7fd47619ffd0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.361482 4770 generic.go:334] "Generic (PLEG): container finished" podID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerID="4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727" exitCode=0 Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.361553 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmnmg" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.361561 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmnmg" event={"ID":"8787b8d7-529c-4b7f-92d3-b6bd82570332","Type":"ContainerDied","Data":"4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727"} Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.361680 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmnmg" event={"ID":"8787b8d7-529c-4b7f-92d3-b6bd82570332","Type":"ContainerDied","Data":"3f857bdf165b80ed0427af50fa3162292cb4441f718e81e139c199c183fcd449"} Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.361698 4770 scope.go:117] "RemoveContainer" containerID="4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.365009 4770 generic.go:334] "Generic (PLEG): container finished" podID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerID="1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e" exitCode=0 Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.365039 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpr7" event={"ID":"024dc397-a3eb-4c42-a05f-7fd47619ffd0","Type":"ContainerDied","Data":"1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e"} Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.365057 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpr7" event={"ID":"024dc397-a3eb-4c42-a05f-7fd47619ffd0","Type":"ContainerDied","Data":"aa555e2c883608f45705a4f69c7e9f02ec32b0da8c12a8a60c787cfc15ca3e4c"} Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.365093 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bpr7" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.384224 4770 scope.go:117] "RemoveContainer" containerID="8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.407831 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wvg5\" (UniqueName: \"kubernetes.io/projected/024dc397-a3eb-4c42-a05f-7fd47619ffd0-kube-api-access-7wvg5\") on node \"crc\" DevicePath \"\"" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.408151 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/024dc397-a3eb-4c42-a05f-7fd47619ffd0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.419580 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmnmg"] Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.420935 4770 scope.go:117] "RemoveContainer" containerID="c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.430470 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jmnmg"] Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.444496 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bpr7"] Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.451718 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6bpr7"] Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.452998 4770 scope.go:117] "RemoveContainer" containerID="4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727" Dec 09 15:19:01 crc kubenswrapper[4770]: E1209 15:19:01.453350 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727\": container with ID starting with 4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727 not found: ID does not exist" containerID="4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.453386 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727"} err="failed to get container status \"4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727\": rpc error: code = NotFound desc = could not find container \"4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727\": container with ID starting with 4d4ecaf1dd4e512c596024c4aa02e3adcc06dec469b844e40458c355ef9c9727 not found: ID does not exist" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.453422 4770 scope.go:117] "RemoveContainer" containerID="8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab" Dec 09 15:19:01 crc kubenswrapper[4770]: E1209 15:19:01.453715 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab\": container with ID starting with 8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab not found: ID does not exist" 
containerID="8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.453822 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab"} err="failed to get container status \"8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab\": rpc error: code = NotFound desc = could not find container \"8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab\": container with ID starting with 8668ab01e5fad3997d68ab4e0b907cc2b3cb718f45ee59c06f97aae7a33713ab not found: ID does not exist" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.453839 4770 scope.go:117] "RemoveContainer" containerID="c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea" Dec 09 15:19:01 crc kubenswrapper[4770]: E1209 15:19:01.454089 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea\": container with ID starting with c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea not found: ID does not exist" containerID="c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.454109 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea"} err="failed to get container status \"c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea\": rpc error: code = NotFound desc = could not find container \"c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea\": container with ID starting with c3afa0ec014423503ac681332692406ec235e56edbcb147f4c858ce7fd9a0dea not found: ID does not exist" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.454122 4770 scope.go:117] "RemoveContainer" containerID="1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.522189 4770 scope.go:117] "RemoveContainer" containerID="20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.546316 4770 scope.go:117] "RemoveContainer" containerID="5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.622541 4770 scope.go:117] "RemoveContainer" containerID="1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e" Dec 09 15:19:01 crc kubenswrapper[4770]: E1209 15:19:01.623095 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e\": container with ID starting with 1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e not found: ID does not exist" containerID="1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.623127 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e"} err="failed to get container status \"1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e\": rpc error: code = NotFound desc = could not find container 
\"1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e\": container with ID starting with 1b8ab90ef57144c311855d1cbb4508445c3d851d10963a0ce638cf7e25bae56e not found: ID does not exist" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.623152 4770 scope.go:117] "RemoveContainer" containerID="20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4" Dec 09 15:19:01 crc kubenswrapper[4770]: E1209 15:19:01.623465 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4\": container with ID starting with 20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4 not found: ID does not exist" containerID="20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.623488 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4"} err="failed to get container status \"20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4\": rpc error: code = NotFound desc = could not find container \"20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4\": container with ID starting with 20f5388e886fb8dc4b3a287eb899e3480a02a245c696f68db3fd5f7da14e97b4 not found: ID does not exist" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.623502 4770 scope.go:117] "RemoveContainer" containerID="5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432" Dec 09 15:19:01 crc kubenswrapper[4770]: E1209 15:19:01.623852 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432\": container with ID starting with 5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432 not found: ID does not exist" containerID="5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432" Dec 09 15:19:01 crc kubenswrapper[4770]: I1209 15:19:01.623882 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432"} err="failed to get container status \"5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432\": rpc error: code = NotFound desc = could not find container \"5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432\": container with ID starting with 5c16535a3266266cb2fe2bae36f884d2aacd8e40833fe4c4e3bf3df8dc3e6432 not found: ID does not exist" Dec 09 15:19:02 crc kubenswrapper[4770]: I1209 15:19:02.603714 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" path="/var/lib/kubelet/pods/024dc397-a3eb-4c42-a05f-7fd47619ffd0/volumes" Dec 09 15:19:02 crc kubenswrapper[4770]: I1209 15:19:02.604773 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8787b8d7-529c-4b7f-92d3-b6bd82570332" path="/var/lib/kubelet/pods/8787b8d7-529c-4b7f-92d3-b6bd82570332/volumes" Dec 09 15:19:04 crc kubenswrapper[4770]: E1209 15:19:04.590942 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" 
podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:19:10 crc kubenswrapper[4770]: E1209 15:19:10.591621 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:19:18 crc kubenswrapper[4770]: E1209 15:19:18.602938 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:19:24 crc kubenswrapper[4770]: E1209 15:19:24.589814 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:19:32 crc kubenswrapper[4770]: E1209 15:19:32.592133 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:19:35 crc kubenswrapper[4770]: E1209 15:19:35.590040 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:19:45 crc kubenswrapper[4770]: E1209 15:19:45.589920 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:19:47 crc kubenswrapper[4770]: E1209 15:19:47.590270 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.183662 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-954f9"] Dec 09 15:19:55 crc kubenswrapper[4770]: E1209 15:19:55.184835 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerName="extract-utilities" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.184862 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerName="extract-utilities" Dec 09 15:19:55 crc kubenswrapper[4770]: E1209 15:19:55.184879 4770 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerName="extract-content" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.184885 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerName="extract-content" Dec 09 15:19:55 crc kubenswrapper[4770]: E1209 15:19:55.184897 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerName="registry-server" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.184906 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerName="registry-server" Dec 09 15:19:55 crc kubenswrapper[4770]: E1209 15:19:55.184917 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerName="extract-content" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.184923 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerName="extract-content" Dec 09 15:19:55 crc kubenswrapper[4770]: E1209 15:19:55.184947 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerName="extract-utilities" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.184953 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerName="extract-utilities" Dec 09 15:19:55 crc kubenswrapper[4770]: E1209 15:19:55.184974 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerName="extract-content" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.184980 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerName="extract-content" Dec 09 15:19:55 crc kubenswrapper[4770]: E1209 15:19:55.184993 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerName="registry-server" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.184998 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerName="registry-server" Dec 09 15:19:55 crc kubenswrapper[4770]: E1209 15:19:55.185011 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerName="extract-utilities" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.185017 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerName="extract-utilities" Dec 09 15:19:55 crc kubenswrapper[4770]: E1209 15:19:55.185029 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerName="registry-server" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.185035 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerName="registry-server" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.185240 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="024dc397-a3eb-4c42-a05f-7fd47619ffd0" containerName="registry-server" Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.185263 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3ac3-19d1-4993-a28d-e4d1ea498a69" containerName="registry-server" Dec 09 15:19:55 crc 
kubenswrapper[4770]: I1209 15:19:55.185273 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8787b8d7-529c-4b7f-92d3-b6bd82570332" containerName="registry-server"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.187753 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.210020 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-954f9"]
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.231523 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-utilities\") pod \"redhat-operators-954f9\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") " pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.231574 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q57js\" (UniqueName: \"kubernetes.io/projected/65c56a16-6650-4c5b-948b-c76f9f0d0076-kube-api-access-q57js\") pod \"redhat-operators-954f9\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") " pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.231772 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-catalog-content\") pod \"redhat-operators-954f9\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") " pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.333555 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-catalog-content\") pod \"redhat-operators-954f9\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") " pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.333677 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-utilities\") pod \"redhat-operators-954f9\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") " pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.333703 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q57js\" (UniqueName: \"kubernetes.io/projected/65c56a16-6650-4c5b-948b-c76f9f0d0076-kube-api-access-q57js\") pod \"redhat-operators-954f9\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") " pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.334218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-catalog-content\") pod \"redhat-operators-954f9\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") " pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.334500 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-utilities\") pod \"redhat-operators-954f9\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") " pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.353140 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q57js\" (UniqueName: \"kubernetes.io/projected/65c56a16-6650-4c5b-948b-c76f9f0d0076-kube-api-access-q57js\") pod \"redhat-operators-954f9\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") " pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:55 crc kubenswrapper[4770]: I1209 15:19:55.531883 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:19:56 crc kubenswrapper[4770]: I1209 15:19:56.008346 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-954f9"]
Dec 09 15:19:56 crc kubenswrapper[4770]: E1209 15:19:56.590216 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:19:57 crc kubenswrapper[4770]: I1209 15:19:57.001821 4770 generic.go:334] "Generic (PLEG): container finished" podID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerID="e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e" exitCode=0
Dec 09 15:19:57 crc kubenswrapper[4770]: I1209 15:19:57.001867 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-954f9" event={"ID":"65c56a16-6650-4c5b-948b-c76f9f0d0076","Type":"ContainerDied","Data":"e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e"}
Dec 09 15:19:57 crc kubenswrapper[4770]: I1209 15:19:57.001893 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-954f9" event={"ID":"65c56a16-6650-4c5b-948b-c76f9f0d0076","Type":"ContainerStarted","Data":"5ec29634154386d7fba444f82a4db843c72458035ec23bb40734344bc8574cf3"}
Dec 09 15:19:58 crc kubenswrapper[4770]: I1209 15:19:58.014586 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-954f9" event={"ID":"65c56a16-6650-4c5b-948b-c76f9f0d0076","Type":"ContainerStarted","Data":"0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34"}
Dec 09 15:19:59 crc kubenswrapper[4770]: I1209 15:19:59.028829 4770 generic.go:334] "Generic (PLEG): container finished" podID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerID="0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34" exitCode=0
Dec 09 15:19:59 crc kubenswrapper[4770]: I1209 15:19:59.028886 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-954f9" event={"ID":"65c56a16-6650-4c5b-948b-c76f9f0d0076","Type":"ContainerDied","Data":"0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34"}
Dec 09 15:20:00 crc kubenswrapper[4770]: I1209 15:20:00.042120 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-954f9" event={"ID":"65c56a16-6650-4c5b-948b-c76f9f0d0076","Type":"ContainerStarted","Data":"af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1"}
Dec 09 15:20:00 crc kubenswrapper[4770]: I1209 15:20:00.066369 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-954f9" podStartSLOduration=2.505552531 podStartE2EDuration="5.066328721s" podCreationTimestamp="2025-12-09 15:19:55 +0000 UTC" firstStartedPulling="2025-12-09 15:19:57.004931894 +0000 UTC m=+3428.901134030" lastFinishedPulling="2025-12-09 15:19:59.565708084 +0000 UTC m=+3431.461910220" observedRunningTime="2025-12-09 15:20:00.061399067 +0000 UTC m=+3431.957601213" watchObservedRunningTime="2025-12-09 15:20:00.066328721 +0000 UTC m=+3431.962530857"
Dec 09 15:20:00 crc kubenswrapper[4770]: E1209 15:20:00.590593 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:20:05 crc kubenswrapper[4770]: I1209 15:20:05.532502 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:20:05 crc kubenswrapper[4770]: I1209 15:20:05.533131 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:20:05 crc kubenswrapper[4770]: I1209 15:20:05.593984 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:20:06 crc kubenswrapper[4770]: I1209 15:20:06.165972 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:20:06 crc kubenswrapper[4770]: I1209 15:20:06.955061 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-954f9"]
Dec 09 15:20:07 crc kubenswrapper[4770]: E1209 15:20:07.592599 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:20:08 crc kubenswrapper[4770]: I1209 15:20:08.128765 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-954f9" podUID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerName="registry-server" containerID="cri-o://af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1" gracePeriod=2
Dec 09 15:20:08 crc kubenswrapper[4770]: I1209 15:20:08.801346 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:20:08 crc kubenswrapper[4770]: I1209 15:20:08.983607 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q57js\" (UniqueName: \"kubernetes.io/projected/65c56a16-6650-4c5b-948b-c76f9f0d0076-kube-api-access-q57js\") pod \"65c56a16-6650-4c5b-948b-c76f9f0d0076\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") "
Dec 09 15:20:08 crc kubenswrapper[4770]: I1209 15:20:08.983652 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-utilities\") pod \"65c56a16-6650-4c5b-948b-c76f9f0d0076\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") "
Dec 09 15:20:08 crc kubenswrapper[4770]: I1209 15:20:08.983702 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-catalog-content\") pod \"65c56a16-6650-4c5b-948b-c76f9f0d0076\" (UID: \"65c56a16-6650-4c5b-948b-c76f9f0d0076\") "
Dec 09 15:20:08 crc kubenswrapper[4770]: I1209 15:20:08.984624 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-utilities" (OuterVolumeSpecName: "utilities") pod "65c56a16-6650-4c5b-948b-c76f9f0d0076" (UID: "65c56a16-6650-4c5b-948b-c76f9f0d0076"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 15:20:08 crc kubenswrapper[4770]: I1209 15:20:08.990010 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c56a16-6650-4c5b-948b-c76f9f0d0076-kube-api-access-q57js" (OuterVolumeSpecName: "kube-api-access-q57js") pod "65c56a16-6650-4c5b-948b-c76f9f0d0076" (UID: "65c56a16-6650-4c5b-948b-c76f9f0d0076"). InnerVolumeSpecName "kube-api-access-q57js". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.090525 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q57js\" (UniqueName: \"kubernetes.io/projected/65c56a16-6650-4c5b-948b-c76f9f0d0076-kube-api-access-q57js\") on node \"crc\" DevicePath \"\""
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.090563 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.105785 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65c56a16-6650-4c5b-948b-c76f9f0d0076" (UID: "65c56a16-6650-4c5b-948b-c76f9f0d0076"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.140978 4770 generic.go:334] "Generic (PLEG): container finished" podID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerID="af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1" exitCode=0
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.141014 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-954f9" event={"ID":"65c56a16-6650-4c5b-948b-c76f9f0d0076","Type":"ContainerDied","Data":"af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1"}
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.141050 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-954f9"
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.141105 4770 scope.go:117] "RemoveContainer" containerID="af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1"
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.141060 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-954f9" event={"ID":"65c56a16-6650-4c5b-948b-c76f9f0d0076","Type":"ContainerDied","Data":"5ec29634154386d7fba444f82a4db843c72458035ec23bb40734344bc8574cf3"}
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.163345 4770 scope.go:117] "RemoveContainer" containerID="0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34"
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.178032 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-954f9"]
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.189561 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-954f9"]
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.192438 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c56a16-6650-4c5b-948b-c76f9f0d0076-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.204379 4770 scope.go:117] "RemoveContainer" containerID="e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e"
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.272140 4770 scope.go:117] "RemoveContainer" containerID="af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1"
Dec 09 15:20:09 crc kubenswrapper[4770]: E1209 15:20:09.272625 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1\": container with ID starting with af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1 not found: ID does not exist" containerID="af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1"
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.272670 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1"} err="failed to get container status \"af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1\": rpc error: code = NotFound desc = could not find container \"af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1\": container with ID starting with af405ebba7147c124eebe36693b773006b4f4ddf82b702bbe0c1f2c516dc39f1 not found: ID does not exist"
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.272699 4770 scope.go:117] "RemoveContainer" containerID="0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34"
Dec 09 15:20:09 crc kubenswrapper[4770]: E1209 15:20:09.273047 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34\": container with ID starting with 0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34 not found: ID does not exist" containerID="0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34"
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.273086 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34"} err="failed to get container status \"0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34\": rpc error: code = NotFound desc = could not find container \"0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34\": container with ID starting with 0673a2864707484239972aa59f5f297f9b2764b4f30e2b32ba505575a9707a34 not found: ID does not exist"
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.273107 4770 scope.go:117] "RemoveContainer" containerID="e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e"
Dec 09 15:20:09 crc kubenswrapper[4770]: E1209 15:20:09.273424 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e\": container with ID starting with e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e not found: ID does not exist" containerID="e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e"
Dec 09 15:20:09 crc kubenswrapper[4770]: I1209 15:20:09.273503 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e"} err="failed to get container status \"e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e\": rpc error: code = NotFound desc = could not find container \"e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e\": container with ID starting with e2dca9ed570722e4634c13a87a8ae0b928f97ba7cd8a3fc01c476adb917d1a1e not found: ID does not exist"
Dec 09 15:20:10 crc kubenswrapper[4770]: I1209 15:20:10.601391 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c56a16-6650-4c5b-948b-c76f9f0d0076" path="/var/lib/kubelet/pods/65c56a16-6650-4c5b-948b-c76f9f0d0076/volumes"
Dec 09 15:20:14 crc kubenswrapper[4770]: I1209 15:20:14.243205 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:20:14 crc kubenswrapper[4770]: I1209 15:20:14.243737 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:20:14 crc kubenswrapper[4770]: E1209 15:20:14.592931 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:20:22 crc kubenswrapper[4770]: E1209 15:20:22.590919 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:20:28 crc kubenswrapper[4770]: E1209 15:20:28.599272 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:20:33 crc kubenswrapper[4770]: E1209 15:20:33.591669 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:20:42 crc kubenswrapper[4770]: E1209 15:20:42.590845 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:20:44 crc kubenswrapper[4770]: I1209 15:20:44.243986 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:20:44 crc kubenswrapper[4770]: I1209 15:20:44.244343 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:20:48 crc kubenswrapper[4770]: E1209 15:20:48.597206 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:20:57 crc kubenswrapper[4770]: E1209 15:20:57.592570 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:20:59 crc kubenswrapper[4770]: E1209 15:20:59.590748 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:21:10 crc kubenswrapper[4770]: E1209 15:21:10.590578 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:21:12 crc kubenswrapper[4770]: E1209 15:21:12.590646 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:21:14 crc kubenswrapper[4770]: I1209 15:21:14.243372 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:21:14 crc kubenswrapper[4770]: I1209 15:21:14.243716 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:21:14 crc kubenswrapper[4770]: I1209 15:21:14.243788 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj"
Dec 09 15:21:14 crc kubenswrapper[4770]: I1209 15:21:14.244639 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3abf703c48387acb3cf79d87fe9831092cfd9613c3799e87d18353fec13b4a0c"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 15:21:14 crc kubenswrapper[4770]: I1209 15:21:14.244709 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://3abf703c48387acb3cf79d87fe9831092cfd9613c3799e87d18353fec13b4a0c" gracePeriod=600
Dec 09 15:21:14 crc kubenswrapper[4770]: I1209 15:21:14.812697 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="3abf703c48387acb3cf79d87fe9831092cfd9613c3799e87d18353fec13b4a0c" exitCode=0
Dec 09 15:21:14 crc kubenswrapper[4770]: I1209 15:21:14.812769 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"3abf703c48387acb3cf79d87fe9831092cfd9613c3799e87d18353fec13b4a0c"}
Dec 09 15:21:14 crc kubenswrapper[4770]: I1209 15:21:14.813123 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef"}
Dec 09 15:21:14 crc kubenswrapper[4770]: I1209 15:21:14.813156 4770 scope.go:117] "RemoveContainer" containerID="11cc72bc2c37c68eb07f4063512661104d259bda73b9a5ee88502a0be627e739"
Dec 09 15:21:24 crc kubenswrapper[4770]: E1209 15:21:24.591082 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:21:25 crc kubenswrapper[4770]: E1209 15:21:25.590065 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:21:35 crc kubenswrapper[4770]: E1209 15:21:35.591309 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:21:39 crc kubenswrapper[4770]: E1209 15:21:39.591904 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:21:47 crc kubenswrapper[4770]: E1209 15:21:47.590691 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:21:51 crc kubenswrapper[4770]: E1209 15:21:51.590996 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:22:01 crc kubenswrapper[4770]: E1209 15:22:01.591361 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:22:05 crc kubenswrapper[4770]: E1209 15:22:05.589877 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:22:12 crc kubenswrapper[4770]: E1209 15:22:12.590144 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:22:19 crc kubenswrapper[4770]: E1209 15:22:19.591892 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:22:25 crc kubenswrapper[4770]: E1209 15:22:25.591629 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:22:30 crc kubenswrapper[4770]: I1209 15:22:30.596149 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 15:22:30 crc kubenswrapper[4770]: E1209 15:22:30.718027 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 15:22:30 crc kubenswrapper[4770]: E1209 15:22:30.718104 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 15:22:30 crc kubenswrapper[4770]: E1209 15:22:30.718296 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 15:22:30 crc kubenswrapper[4770]: E1209 15:22:30.719510 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:22:39 crc kubenswrapper[4770]: E1209 15:22:39.590933 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:22:46 crc kubenswrapper[4770]: E1209 15:22:46.594486 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:22:50 crc kubenswrapper[4770]: E1209 15:22:50.721383 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 15:22:50 crc kubenswrapper[4770]: E1209 15:22:50.722213 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 15:22:50 crc kubenswrapper[4770]: E1209 15:22:50.722507 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 15:22:50 crc kubenswrapper[4770]: E1209 15:22:50.723819 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:22:59 crc kubenswrapper[4770]: E1209 15:22:59.590406 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:23:01 crc kubenswrapper[4770]: I1209 15:23:01.934493 4770 generic.go:334] "Generic (PLEG): container finished" podID="89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d" containerID="a4d559bf48c1969bed694f7ffb856046aa7cfa53c00dc84ccbaa0274f9fff8aa" exitCode=2
Dec 09 15:23:01 crc kubenswrapper[4770]: I1209 15:23:01.934574 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" event={"ID":"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d","Type":"ContainerDied","Data":"a4d559bf48c1969bed694f7ffb856046aa7cfa53c00dc84ccbaa0274f9fff8aa"}
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.491693 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr"
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.691216 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pd2f\" (UniqueName: \"kubernetes.io/projected/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-kube-api-access-6pd2f\") pod \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") "
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.691333 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-ssh-key\") pod \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") "
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.691561 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-inventory\") pod \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\" (UID: \"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d\") "
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.699239 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-kube-api-access-6pd2f" (OuterVolumeSpecName: "kube-api-access-6pd2f") pod "89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d" (UID: "89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d"). InnerVolumeSpecName "kube-api-access-6pd2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.734215 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d" (UID: "89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.736497 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-inventory" (OuterVolumeSpecName: "inventory") pod "89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d" (UID: "89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.795212 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.795247 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-inventory\") on node \"crc\" DevicePath \"\""
Dec 09 15:23:03 crc kubenswrapper[4770]: I1209 15:23:03.795262 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pd2f\" (UniqueName: \"kubernetes.io/projected/89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d-kube-api-access-6pd2f\") on node \"crc\" DevicePath \"\""
Dec 09 15:23:04 crc kubenswrapper[4770]: I1209 15:23:04.049277 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr" event={"ID":"89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d","Type":"ContainerDied","Data":"71518faed5d5be4c76ff33118b2472ab5805f6a46434d8c5a6368555e828416c"}
Dec 09 15:23:04 crc kubenswrapper[4770]: I1209 15:23:04.049325 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr"
Dec 09 15:23:04 crc kubenswrapper[4770]: I1209 15:23:04.049330 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71518faed5d5be4c76ff33118b2472ab5805f6a46434d8c5a6368555e828416c"
Dec 09 15:23:04 crc kubenswrapper[4770]: E1209 15:23:04.592306 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:23:11 crc kubenswrapper[4770]: E1209 15:23:11.591225 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:23:14 crc kubenswrapper[4770]: I1209 15:23:14.243784 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:23:14 crc kubenswrapper[4770]: I1209 15:23:14.244105 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:23:18 crc kubenswrapper[4770]: E1209 15:23:18.598859 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:23:24 crc kubenswrapper[4770]: E1209 15:23:24.590555 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:23:31 crc kubenswrapper[4770]: E1209 15:23:31.596246 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:23:38 crc kubenswrapper[4770]: E1209 15:23:38.606090 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:23:44 crc kubenswrapper[4770]: I1209 15:23:44.243128 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:23:44 crc kubenswrapper[4770]: I1209 15:23:44.243538 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:23:45 crc kubenswrapper[4770]: E1209 15:23:45.591339 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:23:50 crc kubenswrapper[4770]: E1209 15:23:50.591312 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:23:58 crc kubenswrapper[4770]: E1209 15:23:58.597423 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:24:04 crc kubenswrapper[4770]: E1209 15:24:04.591705 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:24:10 crc kubenswrapper[4770]: E1209 15:24:10.592393 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:24:14 crc kubenswrapper[4770]: I1209 15:24:14.244097 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:24:14 crc kubenswrapper[4770]: I1209 15:24:14.244782 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:24:14 crc kubenswrapper[4770]: I1209 15:24:14.244849 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj"
Dec 09 15:24:14 crc kubenswrapper[4770]: I1209 15:24:14.245837 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 15:24:14 crc kubenswrapper[4770]: I1209 15:24:14.245912 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" gracePeriod=600
Dec 09 15:24:14 crc kubenswrapper[4770]: E1209 15:24:14.372961 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:24:14 crc kubenswrapper[4770]: I1209 15:24:14.809179 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" exitCode=0
Dec 09 15:24:14 crc kubenswrapper[4770]: I1209 15:24:14.809233 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef"}
Dec 09 15:24:14 crc kubenswrapper[4770]: I1209 15:24:14.809318 4770 scope.go:117] "RemoveContainer" containerID="3abf703c48387acb3cf79d87fe9831092cfd9613c3799e87d18353fec13b4a0c"
Dec 09 15:24:14 crc kubenswrapper[4770]: I1209 15:24:14.810013 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef"
Dec 09 15:24:14 crc kubenswrapper[4770]: E1209 15:24:14.810317 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:24:15 crc kubenswrapper[4770]: E1209 15:24:15.589853 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.028173 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"]
Dec 09 15:24:21 crc kubenswrapper[4770]: E1209 15:24:21.029332 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerName="extract-utilities"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.029358 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerName="extract-utilities"
Dec 09 15:24:21 crc kubenswrapper[4770]: E1209 15:24:21.029380 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerName="extract-content"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.029391 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerName="extract-content"
Dec 09 15:24:21 crc kubenswrapper[4770]: E1209 15:24:21.029416 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.029428 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 15:24:21 crc kubenswrapper[4770]: E1209 15:24:21.029465 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerName="registry-server"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.029473 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerName="registry-server"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.029784 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.029802 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c56a16-6650-4c5b-948b-c76f9f0d0076" containerName="registry-server"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.030857 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.033150 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.033339 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.033437 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.033462 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.083843 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"]
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.180015 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.180086 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.180171 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72gjd\" (UniqueName: \"kubernetes.io/projected/b668c218-32c2-4a09-80d6-f98e619550bb-kube-api-access-72gjd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.282535 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72gjd\" (UniqueName: \"kubernetes.io/projected/b668c218-32c2-4a09-80d6-f98e619550bb-kube-api-access-72gjd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.282690 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.282739 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.289792 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.290226 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.302672 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72gjd\" (UniqueName: \"kubernetes.io/projected/b668c218-32c2-4a09-80d6-f98e619550bb-kube-api-access-72gjd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.355103 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"
Dec 09 15:24:21 crc kubenswrapper[4770]: I1209 15:24:21.923552 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7"]
Dec 09 15:24:22 crc kubenswrapper[4770]: I1209 15:24:22.904308 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7" event={"ID":"b668c218-32c2-4a09-80d6-f98e619550bb","Type":"ContainerStarted","Data":"98d8c354a1296d2a7f58559e9082d765438bf9fcc426aa784b4b32261a26a81c"}
Dec 09 15:24:22 crc kubenswrapper[4770]: I1209 15:24:22.904687 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7" event={"ID":"b668c218-32c2-4a09-80d6-f98e619550bb","Type":"ContainerStarted","Data":"b136e4ab26dc11f3deede037a723377b839192b53cf02bcbac2ddafe619169fa"}
Dec 09 15:24:22 crc kubenswrapper[4770]: I1209 15:24:22.930791 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7" podStartSLOduration=1.471316856 podStartE2EDuration="1.930703415s" podCreationTimestamp="2025-12-09 15:24:21 +0000 UTC" firstStartedPulling="2025-12-09 15:24:21.914389875 +0000 UTC m=+3693.810592011" lastFinishedPulling="2025-12-09 15:24:22.373776434 +0000 UTC m=+3694.269978570" observedRunningTime="2025-12-09 15:24:22.920281868 +0000 UTC m=+3694.816484014" watchObservedRunningTime="2025-12-09 15:24:22.930703415 +0000 UTC m=+3694.826905561"
Dec 09 15:24:24 crc kubenswrapper[4770]: E1209 15:24:24.590982 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:24:26 crc kubenswrapper[4770]: I1209 15:24:26.588210 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef"
Dec 09 15:24:26 crc kubenswrapper[4770]: E1209 15:24:26.588713 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:24:28 crc kubenswrapper[4770]: E1209 15:24:28.607512 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:24:38 crc kubenswrapper[4770]: E1209 15:24:38.597234 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:24:39 crc kubenswrapper[4770]: E1209 15:24:39.589936 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:24:41 crc kubenswrapper[4770]: I1209 15:24:41.588426 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef"
Dec 09 15:24:41 crc kubenswrapper[4770]: E1209 15:24:41.589005 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:24:49 crc kubenswrapper[4770]: E1209 15:24:49.590592 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:24:54 crc kubenswrapper[4770]: E1209 15:24:54.591383 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:24:56 crc kubenswrapper[4770]: I1209 15:24:56.589840 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef"
Dec 09 15:24:56 crc kubenswrapper[4770]: E1209 15:24:56.590695 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:25:03 crc kubenswrapper[4770]: E1209 15:25:03.591198 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:25:09 crc kubenswrapper[4770]: I1209 15:25:09.593853 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef"
Dec 09 15:25:09 crc kubenswrapper[4770]: E1209 15:25:09.594655 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\""
pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:25:09 crc kubenswrapper[4770]: E1209 15:25:09.595304 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:25:14 crc kubenswrapper[4770]: E1209 15:25:14.593161 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:25:21 crc kubenswrapper[4770]: E1209 15:25:21.594654 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:25:23 crc kubenswrapper[4770]: I1209 15:25:23.589202 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:25:23 crc kubenswrapper[4770]: E1209 15:25:23.589832 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:25:25 crc kubenswrapper[4770]: E1209 15:25:25.590066 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:25:35 crc kubenswrapper[4770]: E1209 15:25:35.590930 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:25:36 crc kubenswrapper[4770]: I1209 15:25:36.588317 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:25:36 crc kubenswrapper[4770]: E1209 15:25:36.588877 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:25:37 crc kubenswrapper[4770]: 
E1209 15:25:37.590504 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:25:47 crc kubenswrapper[4770]: E1209 15:25:47.591358 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:25:50 crc kubenswrapper[4770]: I1209 15:25:50.588843 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:25:50 crc kubenswrapper[4770]: E1209 15:25:50.589578 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:25:51 crc kubenswrapper[4770]: E1209 15:25:51.590704 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:25:59 crc kubenswrapper[4770]: E1209 15:25:59.591596 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:26:02 crc kubenswrapper[4770]: I1209 15:26:02.588813 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:26:02 crc kubenswrapper[4770]: E1209 15:26:02.589750 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:26:02 crc kubenswrapper[4770]: E1209 15:26:02.591100 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:26:10 crc kubenswrapper[4770]: E1209 15:26:10.591179 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:26:16 crc kubenswrapper[4770]: E1209 15:26:16.595956 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:26:17 crc kubenswrapper[4770]: I1209 15:26:17.588458 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:26:17 crc kubenswrapper[4770]: E1209 15:26:17.589060 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:26:22 crc kubenswrapper[4770]: E1209 15:26:22.590787 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:26:28 crc kubenswrapper[4770]: I1209 15:26:28.597494 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:26:28 crc kubenswrapper[4770]: E1209 15:26:28.598219 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:26:32 crc kubenswrapper[4770]: E1209 15:26:32.591589 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:26:37 crc kubenswrapper[4770]: E1209 15:26:37.591666 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:26:42 crc kubenswrapper[4770]: I1209 15:26:42.589841 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:26:42 crc kubenswrapper[4770]: E1209 15:26:42.590813 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:26:43 crc kubenswrapper[4770]: E1209 15:26:43.607263 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:26:51 crc kubenswrapper[4770]: E1209 15:26:51.591059 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:26:57 crc kubenswrapper[4770]: I1209 15:26:57.588455 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:26:57 crc kubenswrapper[4770]: E1209 15:26:57.589353 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:26:58 crc kubenswrapper[4770]: E1209 15:26:58.598507 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:27:05 crc kubenswrapper[4770]: E1209 15:27:05.590956 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:27:10 crc kubenswrapper[4770]: E1209 15:27:10.591327 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:27:11 crc kubenswrapper[4770]: I1209 15:27:11.590298 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:27:11 crc kubenswrapper[4770]: E1209 15:27:11.591340 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:27:20 crc kubenswrapper[4770]: E1209 15:27:20.591923 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:27:23 crc kubenswrapper[4770]: E1209 15:27:23.590570 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:27:25 crc kubenswrapper[4770]: I1209 15:27:25.588337 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:27:25 crc kubenswrapper[4770]: E1209 15:27:25.588951 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:27:35 crc kubenswrapper[4770]: I1209 15:27:35.590042 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:27:35 crc kubenswrapper[4770]: E1209 15:27:35.684916 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:27:35 crc kubenswrapper[4770]: E1209 15:27:35.684992 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:27:35 crc kubenswrapper[4770]: E1209 15:27:35.685182 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:27:35 crc kubenswrapper[4770]: E1209 15:27:35.686625 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:27:36 crc kubenswrapper[4770]: E1209 15:27:36.591224 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:27:39 crc kubenswrapper[4770]: I1209 15:27:39.588329 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:27:39 crc kubenswrapper[4770]: E1209 15:27:39.589041 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:27:47 crc kubenswrapper[4770]: E1209 15:27:47.590702 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:27:49 crc kubenswrapper[4770]: E1209 15:27:49.589812 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:27:54 crc kubenswrapper[4770]: I1209 15:27:54.588656 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:27:54 crc kubenswrapper[4770]: E1209 15:27:54.589526 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:28:01 crc kubenswrapper[4770]: E1209 15:28:01.591606 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:28:02 crc kubenswrapper[4770]: E1209 15:28:02.720492 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:28:02 crc kubenswrapper[4770]: E1209 15:28:02.720837 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:28:02 crc kubenswrapper[4770]: E1209 15:28:02.720961 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 15:28:02 crc kubenswrapper[4770]: E1209 15:28:02.722167 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:28:05 crc kubenswrapper[4770]: I1209 15:28:05.588706 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:28:05 crc kubenswrapper[4770]: E1209 15:28:05.589634 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:28:13 crc kubenswrapper[4770]: E1209 15:28:13.591893 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:28:16 crc kubenswrapper[4770]: I1209 15:28:16.589629 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:28:16 crc kubenswrapper[4770]: E1209 15:28:16.590404 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:28:17 crc kubenswrapper[4770]: E1209 15:28:17.591408 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:28:27 crc kubenswrapper[4770]: E1209 15:28:27.591562 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:28:29 crc kubenswrapper[4770]: I1209 15:28:29.588582 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:28:29 crc kubenswrapper[4770]: E1209 15:28:29.589284 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:28:31 crc kubenswrapper[4770]: E1209 15:28:31.591945 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:28:42 crc kubenswrapper[4770]: E1209 15:28:42.590866 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:28:42 crc kubenswrapper[4770]: E1209 15:28:42.591206 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:28:44 crc kubenswrapper[4770]: I1209 15:28:44.589144 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:28:44 crc kubenswrapper[4770]: E1209 15:28:44.589438 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:28:53 crc kubenswrapper[4770]: E1209 15:28:53.590481 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:28:55 crc kubenswrapper[4770]: I1209 15:28:55.588296 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:28:55 crc kubenswrapper[4770]: E1209 15:28:55.588856 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:28:56 crc kubenswrapper[4770]: E1209 15:28:56.592785 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:29:04 crc kubenswrapper[4770]: E1209 15:29:04.590022 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:29:07 crc kubenswrapper[4770]: E1209 15:29:07.591307 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:29:09 crc kubenswrapper[4770]: I1209 15:29:09.588773 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:29:09 crc kubenswrapper[4770]: E1209 15:29:09.590581 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:29:18 crc kubenswrapper[4770]: E1209 15:29:18.593280 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:29:21 crc kubenswrapper[4770]: I1209 15:29:21.589058 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 15:29:21 crc kubenswrapper[4770]: E1209 15:29:21.591571 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:29:22 crc kubenswrapper[4770]: I1209 15:29:22.218625 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"be8ecc0800bdb4292baf6436014d7cf958d7f7a64d5e6e70813f3c8b4b20ecbe"} Dec 09 15:29:29 crc kubenswrapper[4770]: I1209 15:29:29.963205 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-58zn8"] Dec 09 15:29:29 crc kubenswrapper[4770]: I1209 15:29:29.983938 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.048232 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-utilities\") pod \"community-operators-58zn8\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.048425 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xsd\" (UniqueName: \"kubernetes.io/projected/3fede375-cfad-4476-9ca2-59f6cb7c3c97-kube-api-access-t2xsd\") pod \"community-operators-58zn8\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.048573 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-catalog-content\") pod \"community-operators-58zn8\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.053553 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58zn8"] Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.149623 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-utilities\") pod \"community-operators-58zn8\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.150002 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xsd\" (UniqueName: \"kubernetes.io/projected/3fede375-cfad-4476-9ca2-59f6cb7c3c97-kube-api-access-t2xsd\") pod \"community-operators-58zn8\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.150059 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-catalog-content\") pod \"community-operators-58zn8\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.150391 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-utilities\") pod \"community-operators-58zn8\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.150485 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-catalog-content\") pod \"community-operators-58zn8\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.171532 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t2xsd\" (UniqueName: \"kubernetes.io/projected/3fede375-cfad-4476-9ca2-59f6cb7c3c97-kube-api-access-t2xsd\") pod \"community-operators-58zn8\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:30 crc kubenswrapper[4770]: I1209 15:29:30.343245 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:31 crc kubenswrapper[4770]: I1209 15:29:31.010403 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58zn8"] Dec 09 15:29:31 crc kubenswrapper[4770]: W1209 15:29:31.010410 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fede375_cfad_4476_9ca2_59f6cb7c3c97.slice/crio-6c9f0cb8321e9b842eceb2b5662c73cb15ece485b47476415cf81e356b324186 WatchSource:0}: Error finding container 6c9f0cb8321e9b842eceb2b5662c73cb15ece485b47476415cf81e356b324186: Status 404 returned error can't find the container with id 6c9f0cb8321e9b842eceb2b5662c73cb15ece485b47476415cf81e356b324186 Dec 09 15:29:31 crc kubenswrapper[4770]: I1209 15:29:31.331443 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58zn8" event={"ID":"3fede375-cfad-4476-9ca2-59f6cb7c3c97","Type":"ContainerStarted","Data":"6c9f0cb8321e9b842eceb2b5662c73cb15ece485b47476415cf81e356b324186"} Dec 09 15:29:31 crc kubenswrapper[4770]: E1209 15:29:31.591198 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:29:32 crc kubenswrapper[4770]: I1209 15:29:32.342886 4770 generic.go:334] "Generic (PLEG): container finished" podID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerID="9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293" exitCode=0 Dec 09 15:29:32 crc kubenswrapper[4770]: I1209 15:29:32.342946 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58zn8" event={"ID":"3fede375-cfad-4476-9ca2-59f6cb7c3c97","Type":"ContainerDied","Data":"9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293"} Dec 09 15:29:33 crc kubenswrapper[4770]: I1209 15:29:33.355261 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58zn8" event={"ID":"3fede375-cfad-4476-9ca2-59f6cb7c3c97","Type":"ContainerStarted","Data":"4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc"} Dec 09 15:29:35 crc kubenswrapper[4770]: I1209 15:29:35.375799 4770 generic.go:334] "Generic (PLEG): container finished" podID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerID="4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc" exitCode=0 Dec 09 15:29:35 crc kubenswrapper[4770]: I1209 15:29:35.375926 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58zn8" event={"ID":"3fede375-cfad-4476-9ca2-59f6cb7c3c97","Type":"ContainerDied","Data":"4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc"} Dec 09 15:29:35 crc kubenswrapper[4770]: E1209 15:29:35.589702 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:29:36 crc kubenswrapper[4770]: I1209 15:29:36.388828 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58zn8" event={"ID":"3fede375-cfad-4476-9ca2-59f6cb7c3c97","Type":"ContainerStarted","Data":"0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26"} Dec 09 15:29:36 crc kubenswrapper[4770]: I1209 15:29:36.417549 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-58zn8" podStartSLOduration=3.893791487 podStartE2EDuration="7.417434379s" podCreationTimestamp="2025-12-09 15:29:29 +0000 UTC" firstStartedPulling="2025-12-09 15:29:32.345233346 +0000 UTC m=+4004.241435482" lastFinishedPulling="2025-12-09 15:29:35.868876238 +0000 UTC m=+4007.765078374" observedRunningTime="2025-12-09 15:29:36.405971129 +0000 UTC m=+4008.302173285" watchObservedRunningTime="2025-12-09 15:29:36.417434379 +0000 UTC m=+4008.313636515" Dec 09 15:29:40 crc kubenswrapper[4770]: I1209 15:29:40.344313 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:40 crc kubenswrapper[4770]: I1209 15:29:40.345144 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:40 crc kubenswrapper[4770]: I1209 15:29:40.421599 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:43 crc kubenswrapper[4770]: E1209 15:29:43.593035 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:29:50 crc kubenswrapper[4770]: I1209 15:29:50.391007 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:50 crc kubenswrapper[4770]: I1209 15:29:50.450696 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-58zn8"] Dec 09 15:29:50 crc kubenswrapper[4770]: I1209 15:29:50.575600 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-58zn8" podUID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerName="registry-server" containerID="cri-o://0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26" gracePeriod=2 Dec 09 15:29:50 crc kubenswrapper[4770]: E1209 15:29:50.591110 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.117805 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.173879 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-utilities\") pod \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.174307 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-catalog-content\") pod \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.174468 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2xsd\" (UniqueName: \"kubernetes.io/projected/3fede375-cfad-4476-9ca2-59f6cb7c3c97-kube-api-access-t2xsd\") pod \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\" (UID: \"3fede375-cfad-4476-9ca2-59f6cb7c3c97\") " Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.175941 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-utilities" (OuterVolumeSpecName: "utilities") pod "3fede375-cfad-4476-9ca2-59f6cb7c3c97" (UID: "3fede375-cfad-4476-9ca2-59f6cb7c3c97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.203048 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fede375-cfad-4476-9ca2-59f6cb7c3c97-kube-api-access-t2xsd" (OuterVolumeSpecName: "kube-api-access-t2xsd") pod "3fede375-cfad-4476-9ca2-59f6cb7c3c97" (UID: "3fede375-cfad-4476-9ca2-59f6cb7c3c97"). InnerVolumeSpecName "kube-api-access-t2xsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.246930 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fede375-cfad-4476-9ca2-59f6cb7c3c97" (UID: "3fede375-cfad-4476-9ca2-59f6cb7c3c97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.277980 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.278032 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fede375-cfad-4476-9ca2-59f6cb7c3c97-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.278076 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2xsd\" (UniqueName: \"kubernetes.io/projected/3fede375-cfad-4476-9ca2-59f6cb7c3c97-kube-api-access-t2xsd\") on node \"crc\" DevicePath \"\"" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.590199 4770 generic.go:334] "Generic (PLEG): container finished" podID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerID="0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26" exitCode=0 Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.590237 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58zn8" event={"ID":"3fede375-cfad-4476-9ca2-59f6cb7c3c97","Type":"ContainerDied","Data":"0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26"} Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.590263 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58zn8" event={"ID":"3fede375-cfad-4476-9ca2-59f6cb7c3c97","Type":"ContainerDied","Data":"6c9f0cb8321e9b842eceb2b5662c73cb15ece485b47476415cf81e356b324186"} Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.590280 4770 scope.go:117] "RemoveContainer" containerID="0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.590311 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58zn8" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.620120 4770 scope.go:117] "RemoveContainer" containerID="4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.657939 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-58zn8"] Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.664860 4770 scope.go:117] "RemoveContainer" containerID="9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.668175 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-58zn8"] Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.732156 4770 scope.go:117] "RemoveContainer" containerID="0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26" Dec 09 15:29:51 crc kubenswrapper[4770]: E1209 15:29:51.733447 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26\": container with ID starting with 0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26 not found: ID does not exist" containerID="0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.733493 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26"} err="failed to get container status \"0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26\": rpc error: code = NotFound desc = could not find container \"0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26\": container with ID starting with 0d30b9abd7810a7f6d93de4c57ba050b3c698f3a68b36a0cea84ef7b3a834c26 not found: ID does not exist" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.733525 4770 scope.go:117] "RemoveContainer" containerID="4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc" Dec 09 15:29:51 crc kubenswrapper[4770]: E1209 15:29:51.734345 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc\": container with ID starting with 4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc not found: ID does not exist" containerID="4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.734374 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc"} err="failed to get container status \"4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc\": rpc error: code = NotFound desc = could not find container \"4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc\": container with ID starting with 4900012d34721dbbca8e4d7815c80435f7b82192d08c125bedc01801784f72dc not found: ID does not exist" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.734391 4770 scope.go:117] "RemoveContainer" containerID="9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293" Dec 09 15:29:51 crc kubenswrapper[4770]: E1209 15:29:51.736587 4770 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293\": container with ID starting with 9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293 not found: ID does not exist" containerID="9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293" Dec 09 15:29:51 crc kubenswrapper[4770]: I1209 15:29:51.736609 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293"} err="failed to get container status \"9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293\": rpc error: code = NotFound desc = could not find container \"9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293\": container with ID starting with 9451b7cea0631f92591d58177f6c4f75b4d09959136cc363ca6b2feb478bf293 not found: ID does not exist" Dec 09 15:29:52 crc kubenswrapper[4770]: I1209 15:29:52.602395 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" path="/var/lib/kubelet/pods/3fede375-cfad-4476-9ca2-59f6cb7c3c97/volumes" Dec 09 15:29:55 crc kubenswrapper[4770]: E1209 15:29:55.590860 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.166344 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7"] Dec 09 15:30:00 crc kubenswrapper[4770]: E1209 15:30:00.167372 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerName="extract-utilities" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.167400 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerName="extract-utilities" Dec 09 15:30:00 crc kubenswrapper[4770]: E1209 15:30:00.167424 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerName="extract-content" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.167430 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerName="extract-content" Dec 09 15:30:00 crc kubenswrapper[4770]: E1209 15:30:00.167441 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerName="registry-server" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.167447 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerName="registry-server" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.167678 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fede375-cfad-4476-9ca2-59f6cb7c3c97" containerName="registry-server" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.168540 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.170416 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.170787 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.207219 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7"] Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.285767 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hq68\" (UniqueName: \"kubernetes.io/projected/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-kube-api-access-8hq68\") pod \"collect-profiles-29421570-7zrh7\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.285829 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-secret-volume\") pod \"collect-profiles-29421570-7zrh7\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.285892 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-config-volume\") pod \"collect-profiles-29421570-7zrh7\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.387391 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hq68\" (UniqueName: \"kubernetes.io/projected/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-kube-api-access-8hq68\") pod \"collect-profiles-29421570-7zrh7\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.387471 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-secret-volume\") pod \"collect-profiles-29421570-7zrh7\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.387535 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-config-volume\") pod \"collect-profiles-29421570-7zrh7\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.388701 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-config-volume\") pod 
\"collect-profiles-29421570-7zrh7\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.953525 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-secret-volume\") pod \"collect-profiles-29421570-7zrh7\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:00 crc kubenswrapper[4770]: I1209 15:30:00.955126 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hq68\" (UniqueName: \"kubernetes.io/projected/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-kube-api-access-8hq68\") pod \"collect-profiles-29421570-7zrh7\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:01 crc kubenswrapper[4770]: I1209 15:30:01.119964 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:01 crc kubenswrapper[4770]: I1209 15:30:01.675393 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7"] Dec 09 15:30:01 crc kubenswrapper[4770]: I1209 15:30:01.700250 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" event={"ID":"7291ac14-e8e0-4517-8ca4-0ca180c46b1d","Type":"ContainerStarted","Data":"44a901cbc991970b3bb8bf19f3bb297d92436b36af94ba527a731c530a33e35f"} Dec 09 15:30:02 crc kubenswrapper[4770]: I1209 15:30:02.711551 4770 generic.go:334] "Generic (PLEG): container finished" podID="7291ac14-e8e0-4517-8ca4-0ca180c46b1d" containerID="38050cc8106d5ff635b5f70405442cfa0474b43b8200925761417a2dd48d5cee" exitCode=0 Dec 09 15:30:02 crc kubenswrapper[4770]: I1209 15:30:02.711749 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" event={"ID":"7291ac14-e8e0-4517-8ca4-0ca180c46b1d","Type":"ContainerDied","Data":"38050cc8106d5ff635b5f70405442cfa0474b43b8200925761417a2dd48d5cee"} Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.193298 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.266960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hq68\" (UniqueName: \"kubernetes.io/projected/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-kube-api-access-8hq68\") pod \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.267068 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-config-volume\") pod \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.267207 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-secret-volume\") pod \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\" (UID: \"7291ac14-e8e0-4517-8ca4-0ca180c46b1d\") " Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.267780 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "7291ac14-e8e0-4517-8ca4-0ca180c46b1d" (UID: "7291ac14-e8e0-4517-8ca4-0ca180c46b1d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.275880 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7291ac14-e8e0-4517-8ca4-0ca180c46b1d" (UID: "7291ac14-e8e0-4517-8ca4-0ca180c46b1d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.276062 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-kube-api-access-8hq68" (OuterVolumeSpecName: "kube-api-access-8hq68") pod "7291ac14-e8e0-4517-8ca4-0ca180c46b1d" (UID: "7291ac14-e8e0-4517-8ca4-0ca180c46b1d"). InnerVolumeSpecName "kube-api-access-8hq68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.370273 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.370317 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.370332 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hq68\" (UniqueName: \"kubernetes.io/projected/7291ac14-e8e0-4517-8ca4-0ca180c46b1d-kube-api-access-8hq68\") on node \"crc\" DevicePath \"\"" Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.739036 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" event={"ID":"7291ac14-e8e0-4517-8ca4-0ca180c46b1d","Type":"ContainerDied","Data":"44a901cbc991970b3bb8bf19f3bb297d92436b36af94ba527a731c530a33e35f"} Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.739098 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44a901cbc991970b3bb8bf19f3bb297d92436b36af94ba527a731c530a33e35f" Dec 09 15:30:04 crc kubenswrapper[4770]: I1209 15:30:04.739212 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421570-7zrh7" Dec 09 15:30:05 crc kubenswrapper[4770]: I1209 15:30:05.309914 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws"] Dec 09 15:30:05 crc kubenswrapper[4770]: I1209 15:30:05.320515 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421525-krtws"] Dec 09 15:30:05 crc kubenswrapper[4770]: E1209 15:30:05.590186 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:30:06 crc kubenswrapper[4770]: I1209 15:30:06.603135 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7044d2d-0426-4e15-ae14-5734939f8884" path="/var/lib/kubelet/pods/b7044d2d-0426-4e15-ae14-5734939f8884/volumes" Dec 09 15:30:10 crc kubenswrapper[4770]: E1209 15:30:10.591253 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:30:20 crc kubenswrapper[4770]: E1209 15:30:20.591777 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:30:23 crc kubenswrapper[4770]: 
I1209 15:30:23.759657 4770 scope.go:117] "RemoveContainer" containerID="7de7e0cb63961f34d9bdfc63ec82990abbabbfd07fb49824546ce2f6ddfc6d36" Dec 09 15:30:24 crc kubenswrapper[4770]: E1209 15:30:24.590400 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.174851 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4vt9v"] Dec 09 15:30:32 crc kubenswrapper[4770]: E1209 15:30:32.175746 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7291ac14-e8e0-4517-8ca4-0ca180c46b1d" containerName="collect-profiles" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.175761 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7291ac14-e8e0-4517-8ca4-0ca180c46b1d" containerName="collect-profiles" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.175951 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7291ac14-e8e0-4517-8ca4-0ca180c46b1d" containerName="collect-profiles" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.177437 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.205900 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a277c80b-567a-4a1b-84ef-d0ee49dfe9bb-utilities\") pod \"redhat-operators-4vt9v\" (UID: \"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb\") " pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.206119 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a277c80b-567a-4a1b-84ef-d0ee49dfe9bb-catalog-content\") pod \"redhat-operators-4vt9v\" (UID: \"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb\") " pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.206176 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdv5\" (UniqueName: \"kubernetes.io/projected/a277c80b-567a-4a1b-84ef-d0ee49dfe9bb-kube-api-access-8vdv5\") pod \"redhat-operators-4vt9v\" (UID: \"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb\") " pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.239334 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4vt9v"] Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.309546 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a277c80b-567a-4a1b-84ef-d0ee49dfe9bb-catalog-content\") pod \"redhat-operators-4vt9v\" (UID: \"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb\") " pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.309779 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vdv5\" (UniqueName: 
\"kubernetes.io/projected/a277c80b-567a-4a1b-84ef-d0ee49dfe9bb-kube-api-access-8vdv5\") pod \"redhat-operators-4vt9v\" (UID: \"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb\") " pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.310140 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a277c80b-567a-4a1b-84ef-d0ee49dfe9bb-catalog-content\") pod \"redhat-operators-4vt9v\" (UID: \"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb\") " pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.311512 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a277c80b-567a-4a1b-84ef-d0ee49dfe9bb-utilities\") pod \"redhat-operators-4vt9v\" (UID: \"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb\") " pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.314073 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a277c80b-567a-4a1b-84ef-d0ee49dfe9bb-utilities\") pod \"redhat-operators-4vt9v\" (UID: \"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb\") " pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.329630 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vdv5\" (UniqueName: \"kubernetes.io/projected/a277c80b-567a-4a1b-84ef-d0ee49dfe9bb-kube-api-access-8vdv5\") pod \"redhat-operators-4vt9v\" (UID: \"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb\") " pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:32 crc kubenswrapper[4770]: I1209 15:30:32.514555 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:33 crc kubenswrapper[4770]: I1209 15:30:33.100642 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4vt9v"] Dec 09 15:30:33 crc kubenswrapper[4770]: W1209 15:30:33.102622 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda277c80b_567a_4a1b_84ef_d0ee49dfe9bb.slice/crio-1aac45ddb1da9f1908abda77296db22d585341fe9c37d6de62463c2b7d613145 WatchSource:0}: Error finding container 1aac45ddb1da9f1908abda77296db22d585341fe9c37d6de62463c2b7d613145: Status 404 returned error can't find the container with id 1aac45ddb1da9f1908abda77296db22d585341fe9c37d6de62463c2b7d613145 Dec 09 15:30:34 crc kubenswrapper[4770]: I1209 15:30:34.066571 4770 generic.go:334] "Generic (PLEG): container finished" podID="a277c80b-567a-4a1b-84ef-d0ee49dfe9bb" containerID="2c07c02ebb9d4b473d0d96986e4e783866acf695b904373e6760af3935e0623c" exitCode=0 Dec 09 15:30:34 crc kubenswrapper[4770]: I1209 15:30:34.066803 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vt9v" event={"ID":"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb","Type":"ContainerDied","Data":"2c07c02ebb9d4b473d0d96986e4e783866acf695b904373e6760af3935e0623c"} Dec 09 15:30:34 crc kubenswrapper[4770]: I1209 15:30:34.066960 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vt9v" event={"ID":"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb","Type":"ContainerStarted","Data":"1aac45ddb1da9f1908abda77296db22d585341fe9c37d6de62463c2b7d613145"} Dec 09 15:30:35 crc kubenswrapper[4770]: E1209 15:30:35.591008 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:30:38 crc kubenswrapper[4770]: I1209 15:30:38.112013 4770 generic.go:334] "Generic (PLEG): container finished" podID="b668c218-32c2-4a09-80d6-f98e619550bb" containerID="98d8c354a1296d2a7f58559e9082d765438bf9fcc426aa784b4b32261a26a81c" exitCode=2 Dec 09 15:30:38 crc kubenswrapper[4770]: I1209 15:30:38.112212 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7" event={"ID":"b668c218-32c2-4a09-80d6-f98e619550bb","Type":"ContainerDied","Data":"98d8c354a1296d2a7f58559e9082d765438bf9fcc426aa784b4b32261a26a81c"} Dec 09 15:30:39 crc kubenswrapper[4770]: E1209 15:30:39.591288 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:30:41 crc kubenswrapper[4770]: I1209 15:30:41.940696 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7" Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.030115 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-inventory\") pod \"b668c218-32c2-4a09-80d6-f98e619550bb\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.030400 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-ssh-key\") pod \"b668c218-32c2-4a09-80d6-f98e619550bb\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.030512 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72gjd\" (UniqueName: \"kubernetes.io/projected/b668c218-32c2-4a09-80d6-f98e619550bb-kube-api-access-72gjd\") pod \"b668c218-32c2-4a09-80d6-f98e619550bb\" (UID: \"b668c218-32c2-4a09-80d6-f98e619550bb\") " Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.039336 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b668c218-32c2-4a09-80d6-f98e619550bb-kube-api-access-72gjd" (OuterVolumeSpecName: "kube-api-access-72gjd") pod "b668c218-32c2-4a09-80d6-f98e619550bb" (UID: "b668c218-32c2-4a09-80d6-f98e619550bb"). InnerVolumeSpecName "kube-api-access-72gjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.065331 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-inventory" (OuterVolumeSpecName: "inventory") pod "b668c218-32c2-4a09-80d6-f98e619550bb" (UID: "b668c218-32c2-4a09-80d6-f98e619550bb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.065662 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b668c218-32c2-4a09-80d6-f98e619550bb" (UID: "b668c218-32c2-4a09-80d6-f98e619550bb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.132794 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.133126 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72gjd\" (UniqueName: \"kubernetes.io/projected/b668c218-32c2-4a09-80d6-f98e619550bb-kube-api-access-72gjd\") on node \"crc\" DevicePath \"\"" Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.133240 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b668c218-32c2-4a09-80d6-f98e619550bb-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.156742 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7" Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.156695 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7" event={"ID":"b668c218-32c2-4a09-80d6-f98e619550bb","Type":"ContainerDied","Data":"b136e4ab26dc11f3deede037a723377b839192b53cf02bcbac2ddafe619169fa"} Dec 09 15:30:42 crc kubenswrapper[4770]: I1209 15:30:42.157151 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b136e4ab26dc11f3deede037a723377b839192b53cf02bcbac2ddafe619169fa" Dec 09 15:30:44 crc kubenswrapper[4770]: I1209 15:30:44.185658 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vt9v" event={"ID":"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb","Type":"ContainerStarted","Data":"466845f91c0f687ef31c2463a9b4d5d4ec010dcee66e46983b69b10f816fbaeb"} Dec 09 15:30:45 crc kubenswrapper[4770]: I1209 15:30:45.198416 4770 generic.go:334] "Generic (PLEG): container finished" podID="a277c80b-567a-4a1b-84ef-d0ee49dfe9bb" containerID="466845f91c0f687ef31c2463a9b4d5d4ec010dcee66e46983b69b10f816fbaeb" exitCode=0 Dec 09 15:30:45 crc kubenswrapper[4770]: I1209 15:30:45.198989 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vt9v" event={"ID":"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb","Type":"ContainerDied","Data":"466845f91c0f687ef31c2463a9b4d5d4ec010dcee66e46983b69b10f816fbaeb"} Dec 09 15:30:48 crc kubenswrapper[4770]: I1209 15:30:48.229432 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vt9v" event={"ID":"a277c80b-567a-4a1b-84ef-d0ee49dfe9bb","Type":"ContainerStarted","Data":"24beeaeefb84a2bd4bf53fc181b514a5e6013811df54980e28163e6fb0ec934e"} Dec 09 15:30:48 crc kubenswrapper[4770]: I1209 15:30:48.257526 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4vt9v" podStartSLOduration=3.333931273 podStartE2EDuration="16.257503297s" podCreationTimestamp="2025-12-09 15:30:32 +0000 UTC" firstStartedPulling="2025-12-09 15:30:34.068452022 +0000 UTC m=+4065.964654148" lastFinishedPulling="2025-12-09 15:30:46.992024026 +0000 UTC m=+4078.888226172" observedRunningTime="2025-12-09 15:30:48.255174824 +0000 UTC m=+4080.151376960" watchObservedRunningTime="2025-12-09 15:30:48.257503297 +0000 UTC m=+4080.153705433" Dec 09 15:30:49 crc kubenswrapper[4770]: E1209 15:30:49.590312 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:30:52 crc kubenswrapper[4770]: I1209 15:30:52.515749 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:52 crc kubenswrapper[4770]: I1209 15:30:52.516450 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:52 crc kubenswrapper[4770]: I1209 15:30:52.629591 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:53 crc kubenswrapper[4770]: I1209 15:30:53.353546 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4vt9v" Dec 09 15:30:53 crc kubenswrapper[4770]: I1209 15:30:53.439384 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4vt9v"] Dec 09 15:30:53 crc kubenswrapper[4770]: I1209 15:30:53.480465 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48dxs"] Dec 09 15:30:53 crc kubenswrapper[4770]: I1209 15:30:53.480818 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-48dxs" podUID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerName="registry-server" containerID="cri-o://a02234cc9d2949db854ef0e8b65e0855eaab37710ceefa903dc5d10840cb531b" gracePeriod=2 Dec 09 15:30:53 crc kubenswrapper[4770]: E1209 15:30:53.590853 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.312610 4770 generic.go:334] "Generic (PLEG): container finished" podID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerID="a02234cc9d2949db854ef0e8b65e0855eaab37710ceefa903dc5d10840cb531b" exitCode=0 Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.313208 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48dxs" event={"ID":"410c8f09-8472-4bb2-a017-24a1b2e9d6af","Type":"ContainerDied","Data":"a02234cc9d2949db854ef0e8b65e0855eaab37710ceefa903dc5d10840cb531b"} Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.587071 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.663616 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-utilities\") pod \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.663844 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sllmh\" (UniqueName: \"kubernetes.io/projected/410c8f09-8472-4bb2-a017-24a1b2e9d6af-kube-api-access-sllmh\") pod \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.663963 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-catalog-content\") pod \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\" (UID: \"410c8f09-8472-4bb2-a017-24a1b2e9d6af\") " Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.664181 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-utilities" (OuterVolumeSpecName: "utilities") pod "410c8f09-8472-4bb2-a017-24a1b2e9d6af" (UID: "410c8f09-8472-4bb2-a017-24a1b2e9d6af"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.665358 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.681676 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410c8f09-8472-4bb2-a017-24a1b2e9d6af-kube-api-access-sllmh" (OuterVolumeSpecName: "kube-api-access-sllmh") pod "410c8f09-8472-4bb2-a017-24a1b2e9d6af" (UID: "410c8f09-8472-4bb2-a017-24a1b2e9d6af"). InnerVolumeSpecName "kube-api-access-sllmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.767116 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sllmh\" (UniqueName: \"kubernetes.io/projected/410c8f09-8472-4bb2-a017-24a1b2e9d6af-kube-api-access-sllmh\") on node \"crc\" DevicePath \"\"" Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.779278 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "410c8f09-8472-4bb2-a017-24a1b2e9d6af" (UID: "410c8f09-8472-4bb2-a017-24a1b2e9d6af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:30:54 crc kubenswrapper[4770]: I1209 15:30:54.869851 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410c8f09-8472-4bb2-a017-24a1b2e9d6af-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:30:55 crc kubenswrapper[4770]: I1209 15:30:55.324422 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48dxs" event={"ID":"410c8f09-8472-4bb2-a017-24a1b2e9d6af","Type":"ContainerDied","Data":"9c78fdcea2eab0b1098342a49af8a7fecf09632a7483ed0b5ed1687621e6d1aa"} Dec 09 15:30:55 crc kubenswrapper[4770]: I1209 15:30:55.324460 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48dxs" Dec 09 15:30:55 crc kubenswrapper[4770]: I1209 15:30:55.324489 4770 scope.go:117] "RemoveContainer" containerID="a02234cc9d2949db854ef0e8b65e0855eaab37710ceefa903dc5d10840cb531b" Dec 09 15:30:55 crc kubenswrapper[4770]: I1209 15:30:55.347310 4770 scope.go:117] "RemoveContainer" containerID="2c4c48e59cd26453e7785b1412e63154cfbff338f00d4df6c999445cec0b68fc" Dec 09 15:30:55 crc kubenswrapper[4770]: I1209 15:30:55.362753 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48dxs"] Dec 09 15:30:55 crc kubenswrapper[4770]: I1209 15:30:55.374397 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-48dxs"] Dec 09 15:30:55 crc kubenswrapper[4770]: I1209 15:30:55.378854 4770 scope.go:117] "RemoveContainer" containerID="d6e00117e78a4c74a0e2452705af6de840a8915564cc001247a0aae3cb250693" Dec 09 15:30:56 crc kubenswrapper[4770]: I1209 15:30:56.603049 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" path="/var/lib/kubelet/pods/410c8f09-8472-4bb2-a017-24a1b2e9d6af/volumes" Dec 09 15:31:01 crc kubenswrapper[4770]: E1209 15:31:01.590528 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:31:06 crc kubenswrapper[4770]: E1209 15:31:06.593139 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:31:13 crc kubenswrapper[4770]: E1209 15:31:13.592149 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:31:21 crc kubenswrapper[4770]: E1209 15:31:21.589996 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:31:25 crc kubenswrapper[4770]: E1209 15:31:25.590350 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:31:35 crc kubenswrapper[4770]: E1209 15:31:35.590371 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:31:40 crc kubenswrapper[4770]: E1209 15:31:40.591003 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:31:44 crc kubenswrapper[4770]: I1209 15:31:44.243085 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:31:44 crc kubenswrapper[4770]: I1209 15:31:44.244572 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:31:50 crc kubenswrapper[4770]: E1209 15:31:50.591664 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:31:54 crc kubenswrapper[4770]: E1209 15:31:54.591244 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:32:04 crc kubenswrapper[4770]: E1209 15:32:04.591487 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:32:05 crc kubenswrapper[4770]: E1209 15:32:05.591897 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:32:14 crc kubenswrapper[4770]: I1209 15:32:14.243927 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:32:14 crc kubenswrapper[4770]: I1209 15:32:14.245623 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" 
podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:32:19 crc kubenswrapper[4770]: E1209 15:32:19.591411 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:32:20 crc kubenswrapper[4770]: E1209 15:32:20.590386 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:32:33 crc kubenswrapper[4770]: E1209 15:32:33.590336 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:32:33 crc kubenswrapper[4770]: E1209 15:32:33.590884 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.392145 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d8hv2"] Dec 09 15:32:38 crc kubenswrapper[4770]: E1209 15:32:38.393233 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b668c218-32c2-4a09-80d6-f98e619550bb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.393254 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b668c218-32c2-4a09-80d6-f98e619550bb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:32:38 crc kubenswrapper[4770]: E1209 15:32:38.393281 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerName="registry-server" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.393288 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerName="registry-server" Dec 09 15:32:38 crc kubenswrapper[4770]: E1209 15:32:38.393303 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerName="extract-content" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.393309 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerName="extract-content" Dec 09 15:32:38 crc kubenswrapper[4770]: E1209 15:32:38.393325 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerName="extract-utilities" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.393333 4770 
state_mem.go:107] "Deleted CPUSet assignment" podUID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerName="extract-utilities" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.393566 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b668c218-32c2-4a09-80d6-f98e619550bb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.393577 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="410c8f09-8472-4bb2-a017-24a1b2e9d6af" containerName="registry-server" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.395249 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.421892 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8hv2"] Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.568322 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j79dz\" (UniqueName: \"kubernetes.io/projected/4fe338af-9416-496f-80e1-e99f1d317f17-kube-api-access-j79dz\") pod \"certified-operators-d8hv2\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.568700 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-utilities\") pod \"certified-operators-d8hv2\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.569086 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-catalog-content\") pod \"certified-operators-d8hv2\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.671262 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-catalog-content\") pod \"certified-operators-d8hv2\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.671341 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j79dz\" (UniqueName: \"kubernetes.io/projected/4fe338af-9416-496f-80e1-e99f1d317f17-kube-api-access-j79dz\") pod \"certified-operators-d8hv2\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.671453 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-utilities\") pod \"certified-operators-d8hv2\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.671873 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-catalog-content\") pod \"certified-operators-d8hv2\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.672266 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-utilities\") pod \"certified-operators-d8hv2\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.695788 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j79dz\" (UniqueName: \"kubernetes.io/projected/4fe338af-9416-496f-80e1-e99f1d317f17-kube-api-access-j79dz\") pod \"certified-operators-d8hv2\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:38 crc kubenswrapper[4770]: I1209 15:32:38.753553 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:39 crc kubenswrapper[4770]: I1209 15:32:39.342483 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8hv2"] Dec 09 15:32:39 crc kubenswrapper[4770]: I1209 15:32:39.472590 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8hv2" event={"ID":"4fe338af-9416-496f-80e1-e99f1d317f17","Type":"ContainerStarted","Data":"b44a90aac52ff9fbcfa8d45c6fb73d416bea56f93c04265548d8049150ad80c4"} Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.484705 4770 generic.go:334] "Generic (PLEG): container finished" podID="4fe338af-9416-496f-80e1-e99f1d317f17" containerID="1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b" exitCode=0 Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.484934 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8hv2" event={"ID":"4fe338af-9416-496f-80e1-e99f1d317f17","Type":"ContainerDied","Data":"1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b"} Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.487307 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.527644 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zzhmd"] Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.530223 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.539067 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzhmd"] Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.628889 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-utilities\") pod \"redhat-marketplace-zzhmd\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.628959 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-catalog-content\") pod \"redhat-marketplace-zzhmd\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.629074 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ksz\" (UniqueName: \"kubernetes.io/projected/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-kube-api-access-d2ksz\") pod \"redhat-marketplace-zzhmd\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.731547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-utilities\") pod \"redhat-marketplace-zzhmd\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.732201 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-catalog-content\") pod \"redhat-marketplace-zzhmd\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.732343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ksz\" (UniqueName: \"kubernetes.io/projected/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-kube-api-access-d2ksz\") pod \"redhat-marketplace-zzhmd\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.732027 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-utilities\") pod \"redhat-marketplace-zzhmd\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.732533 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-catalog-content\") pod \"redhat-marketplace-zzhmd\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.755559 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-d2ksz\" (UniqueName: \"kubernetes.io/projected/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-kube-api-access-d2ksz\") pod \"redhat-marketplace-zzhmd\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:40 crc kubenswrapper[4770]: I1209 15:32:40.864706 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:41 crc kubenswrapper[4770]: I1209 15:32:41.413384 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzhmd"] Dec 09 15:32:41 crc kubenswrapper[4770]: W1209 15:32:41.413706 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e7cb4fb_3562_4fae_95f1_081b2af95fe0.slice/crio-69aecfa635ae47395f2b11e6ea9f8bd14a8464d76e09912831435f069a6b1472 WatchSource:0}: Error finding container 69aecfa635ae47395f2b11e6ea9f8bd14a8464d76e09912831435f069a6b1472: Status 404 returned error can't find the container with id 69aecfa635ae47395f2b11e6ea9f8bd14a8464d76e09912831435f069a6b1472 Dec 09 15:32:41 crc kubenswrapper[4770]: I1209 15:32:41.508840 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzhmd" event={"ID":"6e7cb4fb-3562-4fae-95f1-081b2af95fe0","Type":"ContainerStarted","Data":"69aecfa635ae47395f2b11e6ea9f8bd14a8464d76e09912831435f069a6b1472"} Dec 09 15:32:42 crc kubenswrapper[4770]: I1209 15:32:42.523411 4770 generic.go:334] "Generic (PLEG): container finished" podID="4fe338af-9416-496f-80e1-e99f1d317f17" containerID="0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0" exitCode=0 Dec 09 15:32:42 crc kubenswrapper[4770]: I1209 15:32:42.523700 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8hv2" event={"ID":"4fe338af-9416-496f-80e1-e99f1d317f17","Type":"ContainerDied","Data":"0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0"} Dec 09 15:32:42 crc kubenswrapper[4770]: I1209 15:32:42.525819 4770 generic.go:334] "Generic (PLEG): container finished" podID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerID="b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d" exitCode=0 Dec 09 15:32:42 crc kubenswrapper[4770]: I1209 15:32:42.525857 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzhmd" event={"ID":"6e7cb4fb-3562-4fae-95f1-081b2af95fe0","Type":"ContainerDied","Data":"b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d"} Dec 09 15:32:43 crc kubenswrapper[4770]: I1209 15:32:43.539623 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8hv2" event={"ID":"4fe338af-9416-496f-80e1-e99f1d317f17","Type":"ContainerStarted","Data":"e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6"} Dec 09 15:32:43 crc kubenswrapper[4770]: I1209 15:32:43.543403 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzhmd" event={"ID":"6e7cb4fb-3562-4fae-95f1-081b2af95fe0","Type":"ContainerStarted","Data":"65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac"} Dec 09 15:32:43 crc kubenswrapper[4770]: I1209 15:32:43.567014 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d8hv2" podStartSLOduration=3.128558331 
podStartE2EDuration="5.566968582s" podCreationTimestamp="2025-12-09 15:32:38 +0000 UTC" firstStartedPulling="2025-12-09 15:32:40.486970456 +0000 UTC m=+4192.383172602" lastFinishedPulling="2025-12-09 15:32:42.925380717 +0000 UTC m=+4194.821582853" observedRunningTime="2025-12-09 15:32:43.558534954 +0000 UTC m=+4195.454737090" watchObservedRunningTime="2025-12-09 15:32:43.566968582 +0000 UTC m=+4195.463170718" Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.243800 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.244175 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.244224 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.245099 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be8ecc0800bdb4292baf6436014d7cf958d7f7a64d5e6e70813f3c8b4b20ecbe"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.245161 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://be8ecc0800bdb4292baf6436014d7cf958d7f7a64d5e6e70813f3c8b4b20ecbe" gracePeriod=600 Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.557352 4770 generic.go:334] "Generic (PLEG): container finished" podID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerID="65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac" exitCode=0 Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.557605 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzhmd" event={"ID":"6e7cb4fb-3562-4fae-95f1-081b2af95fe0","Type":"ContainerDied","Data":"65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac"} Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.561642 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="be8ecc0800bdb4292baf6436014d7cf958d7f7a64d5e6e70813f3c8b4b20ecbe" exitCode=0 Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.561742 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"be8ecc0800bdb4292baf6436014d7cf958d7f7a64d5e6e70813f3c8b4b20ecbe"} Dec 09 15:32:44 crc kubenswrapper[4770]: I1209 15:32:44.561809 4770 scope.go:117] "RemoveContainer" containerID="d60dcc18cbcfde24ef741f8ff13d02d26976f84ec9f5ad6e15d00d1b4d98aaef" Dec 09 
15:32:45 crc kubenswrapper[4770]: I1209 15:32:45.645711 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7"} Dec 09 15:32:45 crc kubenswrapper[4770]: I1209 15:32:45.654409 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzhmd" event={"ID":"6e7cb4fb-3562-4fae-95f1-081b2af95fe0","Type":"ContainerStarted","Data":"212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e"} Dec 09 15:32:45 crc kubenswrapper[4770]: I1209 15:32:45.704834 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zzhmd" podStartSLOduration=3.260753938 podStartE2EDuration="5.704804852s" podCreationTimestamp="2025-12-09 15:32:40 +0000 UTC" firstStartedPulling="2025-12-09 15:32:42.527888988 +0000 UTC m=+4194.424091144" lastFinishedPulling="2025-12-09 15:32:44.971939922 +0000 UTC m=+4196.868142058" observedRunningTime="2025-12-09 15:32:45.696038576 +0000 UTC m=+4197.592240732" watchObservedRunningTime="2025-12-09 15:32:45.704804852 +0000 UTC m=+4197.601007008" Dec 09 15:32:45 crc kubenswrapper[4770]: E1209 15:32:45.772779 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:32:45 crc kubenswrapper[4770]: E1209 15:32:45.773369 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:32:45 crc kubenswrapper[4770]: E1209 15:32:45.773669 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:32:45 crc kubenswrapper[4770]: E1209 15:32:45.776162 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:32:47 crc kubenswrapper[4770]: E1209 15:32:47.592105 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:32:48 crc kubenswrapper[4770]: I1209 15:32:48.754130 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:48 crc kubenswrapper[4770]: I1209 15:32:48.754362 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:48 crc kubenswrapper[4770]: I1209 15:32:48.823841 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:49 crc kubenswrapper[4770]: I1209 15:32:49.754865 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:50 crc kubenswrapper[4770]: I1209 15:32:50.865419 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:50 crc kubenswrapper[4770]: I1209 15:32:50.865825 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:50 crc kubenswrapper[4770]: I1209 15:32:50.908217 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8hv2"] Dec 09 15:32:50 crc kubenswrapper[4770]: I1209 15:32:50.935562 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:51 crc kubenswrapper[4770]: I1209 15:32:51.725230 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d8hv2" podUID="4fe338af-9416-496f-80e1-e99f1d317f17" containerName="registry-server" containerID="cri-o://e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6" gracePeriod=2 Dec 09 15:32:51 crc kubenswrapper[4770]: I1209 15:32:51.804114 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.421299 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.482360 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-catalog-content\") pod \"4fe338af-9416-496f-80e1-e99f1d317f17\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.482549 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-utilities\") pod \"4fe338af-9416-496f-80e1-e99f1d317f17\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.482609 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j79dz\" (UniqueName: \"kubernetes.io/projected/4fe338af-9416-496f-80e1-e99f1d317f17-kube-api-access-j79dz\") pod \"4fe338af-9416-496f-80e1-e99f1d317f17\" (UID: \"4fe338af-9416-496f-80e1-e99f1d317f17\") " Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.483425 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-utilities" (OuterVolumeSpecName: "utilities") pod "4fe338af-9416-496f-80e1-e99f1d317f17" (UID: "4fe338af-9416-496f-80e1-e99f1d317f17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.488898 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe338af-9416-496f-80e1-e99f1d317f17-kube-api-access-j79dz" (OuterVolumeSpecName: "kube-api-access-j79dz") pod "4fe338af-9416-496f-80e1-e99f1d317f17" (UID: "4fe338af-9416-496f-80e1-e99f1d317f17"). InnerVolumeSpecName "kube-api-access-j79dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.535252 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fe338af-9416-496f-80e1-e99f1d317f17" (UID: "4fe338af-9416-496f-80e1-e99f1d317f17"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.584994 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.585026 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe338af-9416-496f-80e1-e99f1d317f17-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.585035 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j79dz\" (UniqueName: \"kubernetes.io/projected/4fe338af-9416-496f-80e1-e99f1d317f17-kube-api-access-j79dz\") on node \"crc\" DevicePath \"\"" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.737568 4770 generic.go:334] "Generic (PLEG): container finished" podID="4fe338af-9416-496f-80e1-e99f1d317f17" containerID="e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6" exitCode=0 Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.737647 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8hv2" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.737674 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8hv2" event={"ID":"4fe338af-9416-496f-80e1-e99f1d317f17","Type":"ContainerDied","Data":"e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6"} Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.737829 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8hv2" event={"ID":"4fe338af-9416-496f-80e1-e99f1d317f17","Type":"ContainerDied","Data":"b44a90aac52ff9fbcfa8d45c6fb73d416bea56f93c04265548d8049150ad80c4"} Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.737857 4770 scope.go:117] "RemoveContainer" containerID="e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.766662 4770 scope.go:117] "RemoveContainer" containerID="0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.767303 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8hv2"] Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.779996 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d8hv2"] Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.794590 4770 scope.go:117] "RemoveContainer" containerID="1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.839487 4770 scope.go:117] "RemoveContainer" containerID="e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6" Dec 09 15:32:52 crc kubenswrapper[4770]: E1209 15:32:52.839970 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6\": container with ID starting with e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6 not found: ID does not exist" containerID="e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.840001 
4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6"} err="failed to get container status \"e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6\": rpc error: code = NotFound desc = could not find container \"e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6\": container with ID starting with e43eb84d6b46c98f553b91897d8167e67d177b08594ce2f9e9db56f6967614e6 not found: ID does not exist" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.840024 4770 scope.go:117] "RemoveContainer" containerID="0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0" Dec 09 15:32:52 crc kubenswrapper[4770]: E1209 15:32:52.840234 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0\": container with ID starting with 0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0 not found: ID does not exist" containerID="0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.840254 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0"} err="failed to get container status \"0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0\": rpc error: code = NotFound desc = could not find container \"0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0\": container with ID starting with 0bcef1e4b07ee18862ee761ad27fec899038ee257ac0bb5915b6dcf30a52b3a0 not found: ID does not exist" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.840266 4770 scope.go:117] "RemoveContainer" containerID="1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b" Dec 09 15:32:52 crc kubenswrapper[4770]: E1209 15:32:52.840440 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b\": container with ID starting with 1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b not found: ID does not exist" containerID="1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b" Dec 09 15:32:52 crc kubenswrapper[4770]: I1209 15:32:52.840459 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b"} err="failed to get container status \"1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b\": rpc error: code = NotFound desc = could not find container \"1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b\": container with ID starting with 1d94ad5b7326d7617991b0c60ef9d251f5e737bf7438161cb9501b764ebbf15b not found: ID does not exist" Dec 09 15:32:53 crc kubenswrapper[4770]: I1209 15:32:53.311665 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzhmd"] Dec 09 15:32:54 crc kubenswrapper[4770]: I1209 15:32:54.606995 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe338af-9416-496f-80e1-e99f1d317f17" path="/var/lib/kubelet/pods/4fe338af-9416-496f-80e1-e99f1d317f17/volumes" Dec 09 15:32:54 crc kubenswrapper[4770]: I1209 15:32:54.758551 4770 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-zzhmd" podUID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerName="registry-server" containerID="cri-o://212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e" gracePeriod=2 Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.320067 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.445003 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-catalog-content\") pod \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.445117 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2ksz\" (UniqueName: \"kubernetes.io/projected/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-kube-api-access-d2ksz\") pod \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.445159 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-utilities\") pod \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\" (UID: \"6e7cb4fb-3562-4fae-95f1-081b2af95fe0\") " Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.446647 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-utilities" (OuterVolumeSpecName: "utilities") pod "6e7cb4fb-3562-4fae-95f1-081b2af95fe0" (UID: "6e7cb4fb-3562-4fae-95f1-081b2af95fe0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.456000 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-kube-api-access-d2ksz" (OuterVolumeSpecName: "kube-api-access-d2ksz") pod "6e7cb4fb-3562-4fae-95f1-081b2af95fe0" (UID: "6e7cb4fb-3562-4fae-95f1-081b2af95fe0"). InnerVolumeSpecName "kube-api-access-d2ksz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.463266 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e7cb4fb-3562-4fae-95f1-081b2af95fe0" (UID: "6e7cb4fb-3562-4fae-95f1-081b2af95fe0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.547752 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2ksz\" (UniqueName: \"kubernetes.io/projected/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-kube-api-access-d2ksz\") on node \"crc\" DevicePath \"\"" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.547793 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.547803 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7cb4fb-3562-4fae-95f1-081b2af95fe0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.772278 4770 generic.go:334] "Generic (PLEG): container finished" podID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerID="212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e" exitCode=0 Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.772334 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzhmd" event={"ID":"6e7cb4fb-3562-4fae-95f1-081b2af95fe0","Type":"ContainerDied","Data":"212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e"} Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.772364 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzhmd" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.772385 4770 scope.go:117] "RemoveContainer" containerID="212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.772372 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzhmd" event={"ID":"6e7cb4fb-3562-4fae-95f1-081b2af95fe0","Type":"ContainerDied","Data":"69aecfa635ae47395f2b11e6ea9f8bd14a8464d76e09912831435f069a6b1472"} Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.810127 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzhmd"] Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.814556 4770 scope.go:117] "RemoveContainer" containerID="65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.821684 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzhmd"] Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.848604 4770 scope.go:117] "RemoveContainer" containerID="b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.904210 4770 scope.go:117] "RemoveContainer" containerID="212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e" Dec 09 15:32:55 crc kubenswrapper[4770]: E1209 15:32:55.904893 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e\": container with ID starting with 212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e not found: ID does not exist" containerID="212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.904950 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e"} err="failed to get container status \"212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e\": rpc error: code = NotFound desc = could not find container \"212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e\": container with ID starting with 212d2e169a3b24b9a6e7e7dd8b6b494cce6ab38293c4d23fef8243a56d68164e not found: ID does not exist" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.904981 4770 scope.go:117] "RemoveContainer" containerID="65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac" Dec 09 15:32:55 crc kubenswrapper[4770]: E1209 15:32:55.905410 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac\": container with ID starting with 65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac not found: ID does not exist" containerID="65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.905457 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac"} err="failed to get container status \"65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac\": rpc error: code = NotFound desc = could not find container \"65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac\": container with ID starting with 65bed35906984adb461ead5bb4e379956424d5ff6bdfecda05b42ceade3f48ac not found: ID does not exist" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.905481 4770 scope.go:117] "RemoveContainer" containerID="b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d" Dec 09 15:32:55 crc kubenswrapper[4770]: E1209 15:32:55.906098 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d\": container with ID starting with b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d not found: ID does not exist" containerID="b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d" Dec 09 15:32:55 crc kubenswrapper[4770]: I1209 15:32:55.906135 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d"} err="failed to get container status \"b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d\": rpc error: code = NotFound desc = could not find container \"b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d\": container with ID starting with b4e697118df6187a9013acc4a67bfe7273e093d8e03c7e35880558bb0acf5f7d not found: ID does not exist" Dec 09 15:32:56 crc kubenswrapper[4770]: I1209 15:32:56.599707 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" path="/var/lib/kubelet/pods/6e7cb4fb-3562-4fae-95f1-081b2af95fe0/volumes" Dec 09 15:32:57 crc kubenswrapper[4770]: E1209 15:32:57.591275 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:33:00 crc kubenswrapper[4770]: E1209 15:33:00.590678 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:33:08 crc kubenswrapper[4770]: E1209 15:33:08.613377 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:33:13 crc kubenswrapper[4770]: E1209 15:33:13.690947 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:33:13 crc kubenswrapper[4770]: E1209 15:33:13.691577 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:33:13 crc kubenswrapper[4770]: E1209 15:33:13.691758 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:33:13 crc kubenswrapper[4770]: E1209 15:33:13.693188 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.048700 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5"] Dec 09 15:33:18 crc kubenswrapper[4770]: E1209 15:33:18.050069 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerName="extract-utilities" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.050091 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerName="extract-utilities" Dec 09 15:33:18 crc kubenswrapper[4770]: E1209 15:33:18.050122 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerName="registry-server" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.050133 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerName="registry-server" Dec 09 15:33:18 crc kubenswrapper[4770]: E1209 15:33:18.050168 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe338af-9416-496f-80e1-e99f1d317f17" containerName="extract-content" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.050180 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe338af-9416-496f-80e1-e99f1d317f17" containerName="extract-content" Dec 09 15:33:18 crc kubenswrapper[4770]: E1209 15:33:18.050202 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe338af-9416-496f-80e1-e99f1d317f17" containerName="extract-utilities" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.050231 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe338af-9416-496f-80e1-e99f1d317f17" containerName="extract-utilities" Dec 09 15:33:18 crc kubenswrapper[4770]: E1209 15:33:18.050250 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe338af-9416-496f-80e1-e99f1d317f17" containerName="registry-server" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.050261 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe338af-9416-496f-80e1-e99f1d317f17" containerName="registry-server" Dec 09 15:33:18 crc kubenswrapper[4770]: E1209 15:33:18.050285 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerName="extract-content" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.050298 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerName="extract-content" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.050629 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7cb4fb-3562-4fae-95f1-081b2af95fe0" containerName="registry-server" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.050675 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe338af-9416-496f-80e1-e99f1d317f17" containerName="registry-server" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.051998 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.056406 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.056482 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.056659 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.057767 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.092823 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5"] Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.227137 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.227537 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.227956 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97k2b\" (UniqueName: \"kubernetes.io/projected/93cfef83-a919-435a-88c7-b6f2b2a6c480-kube-api-access-97k2b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.330090 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.330566 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.330935 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97k2b\" (UniqueName: \"kubernetes.io/projected/93cfef83-a919-435a-88c7-b6f2b2a6c480-kube-api-access-97k2b\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.337411 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.338646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.450785 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97k2b\" (UniqueName: \"kubernetes.io/projected/93cfef83-a919-435a-88c7-b6f2b2a6c480-kube-api-access-97k2b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:18 crc kubenswrapper[4770]: I1209 15:33:18.684469 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:33:19 crc kubenswrapper[4770]: I1209 15:33:19.212772 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5"] Dec 09 15:33:20 crc kubenswrapper[4770]: I1209 15:33:20.036682 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" event={"ID":"93cfef83-a919-435a-88c7-b6f2b2a6c480","Type":"ContainerStarted","Data":"2dffc4ee90d1ba00afc7f65902f12098ba68794e8224238463427aa9d48f0e70"} Dec 09 15:33:20 crc kubenswrapper[4770]: I1209 15:33:20.037248 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" event={"ID":"93cfef83-a919-435a-88c7-b6f2b2a6c480","Type":"ContainerStarted","Data":"2a01f3b47d03ffa530b206d7f97021b55ffdf2ae550740f31dc2f20ad820ccfa"} Dec 09 15:33:20 crc kubenswrapper[4770]: I1209 15:33:20.053871 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" podStartSLOduration=1.5558067389999999 podStartE2EDuration="2.053846566s" podCreationTimestamp="2025-12-09 15:33:18 +0000 UTC" firstStartedPulling="2025-12-09 15:33:19.222312509 +0000 UTC m=+4231.118514645" lastFinishedPulling="2025-12-09 15:33:19.720352336 +0000 UTC m=+4231.616554472" observedRunningTime="2025-12-09 15:33:20.051258456 +0000 UTC m=+4231.947460602" watchObservedRunningTime="2025-12-09 15:33:20.053846566 +0000 UTC m=+4231.950048712" Dec 09 15:33:21 crc kubenswrapper[4770]: E1209 15:33:21.590844 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:33:28 crc kubenswrapper[4770]: E1209 15:33:28.601608 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:33:35 crc kubenswrapper[4770]: E1209 15:33:35.592664 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:33:43 crc kubenswrapper[4770]: E1209 15:33:43.591900 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:33:49 crc kubenswrapper[4770]: E1209 15:33:49.591531 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:33:58 crc kubenswrapper[4770]: E1209 15:33:58.596137 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:34:01 crc kubenswrapper[4770]: E1209 15:34:01.590525 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:34:09 crc kubenswrapper[4770]: E1209 15:34:09.590835 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:34:13 crc kubenswrapper[4770]: E1209 15:34:13.591992 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:34:23 crc kubenswrapper[4770]: E1209 15:34:23.590437 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:34:24 crc kubenswrapper[4770]: E1209 15:34:24.589692 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:34:37 crc kubenswrapper[4770]: E1209 15:34:37.590599 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:34:38 crc kubenswrapper[4770]: E1209 15:34:38.600122 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:34:44 crc kubenswrapper[4770]: I1209 15:34:44.244091 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:34:44 crc kubenswrapper[4770]: I1209 15:34:44.245267 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:34:49 crc kubenswrapper[4770]: E1209 15:34:49.590928 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:34:50 crc kubenswrapper[4770]: E1209 15:34:50.589995 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:35:02 crc kubenswrapper[4770]: E1209 15:35:02.591163 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:35:03 crc kubenswrapper[4770]: E1209 15:35:03.590272 4770 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:35:13 crc kubenswrapper[4770]: E1209 15:35:13.590348 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:35:14 crc kubenswrapper[4770]: I1209 15:35:14.243352 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:35:14 crc kubenswrapper[4770]: I1209 15:35:14.243417 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:35:14 crc kubenswrapper[4770]: E1209 15:35:14.590583 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:35:26 crc kubenswrapper[4770]: E1209 15:35:26.593257 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:35:27 crc kubenswrapper[4770]: E1209 15:35:27.589914 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:35:41 crc kubenswrapper[4770]: E1209 15:35:41.591080 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:35:41 crc kubenswrapper[4770]: E1209 15:35:41.591164 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:35:44 crc kubenswrapper[4770]: I1209 
15:35:44.243479 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:35:44 crc kubenswrapper[4770]: I1209 15:35:44.243958 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:35:44 crc kubenswrapper[4770]: I1209 15:35:44.244031 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 15:35:44 crc kubenswrapper[4770]: I1209 15:35:44.245332 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:35:44 crc kubenswrapper[4770]: I1209 15:35:44.245430 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" gracePeriod=600 Dec 09 15:35:44 crc kubenswrapper[4770]: E1209 15:35:44.364536 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:35:44 crc kubenswrapper[4770]: I1209 15:35:44.587060 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" exitCode=0 Dec 09 15:35:44 crc kubenswrapper[4770]: I1209 15:35:44.587163 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7"} Dec 09 15:35:44 crc kubenswrapper[4770]: I1209 15:35:44.587451 4770 scope.go:117] "RemoveContainer" containerID="be8ecc0800bdb4292baf6436014d7cf958d7f7a64d5e6e70813f3c8b4b20ecbe" Dec 09 15:35:44 crc kubenswrapper[4770]: I1209 15:35:44.589274 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:35:44 crc kubenswrapper[4770]: E1209 15:35:44.589801 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:35:53 crc kubenswrapper[4770]: E1209 15:35:53.592448 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:35:55 crc kubenswrapper[4770]: E1209 15:35:55.590477 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:35:58 crc kubenswrapper[4770]: I1209 15:35:58.602286 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:35:58 crc kubenswrapper[4770]: E1209 15:35:58.603037 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:36:06 crc kubenswrapper[4770]: E1209 15:36:06.590768 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:36:06 crc kubenswrapper[4770]: E1209 15:36:06.591365 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:36:11 crc kubenswrapper[4770]: I1209 15:36:11.589301 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:36:11 crc kubenswrapper[4770]: E1209 15:36:11.590571 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:36:17 crc kubenswrapper[4770]: E1209 15:36:17.592646 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:36:20 crc kubenswrapper[4770]: 
E1209 15:36:20.591836 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:36:25 crc kubenswrapper[4770]: I1209 15:36:25.588250 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:36:25 crc kubenswrapper[4770]: E1209 15:36:25.589005 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:36:31 crc kubenswrapper[4770]: E1209 15:36:31.593492 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:36:34 crc kubenswrapper[4770]: E1209 15:36:34.600047 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:36:40 crc kubenswrapper[4770]: I1209 15:36:40.588379 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:36:40 crc kubenswrapper[4770]: E1209 15:36:40.589509 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:36:45 crc kubenswrapper[4770]: E1209 15:36:45.589714 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:36:48 crc kubenswrapper[4770]: E1209 15:36:48.598358 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:36:52 crc kubenswrapper[4770]: I1209 15:36:52.589302 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:36:52 crc 
kubenswrapper[4770]: E1209 15:36:52.590280 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:36:57 crc kubenswrapper[4770]: E1209 15:36:57.590871 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:37:02 crc kubenswrapper[4770]: E1209 15:37:02.591245 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:37:03 crc kubenswrapper[4770]: I1209 15:37:03.588983 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:37:03 crc kubenswrapper[4770]: E1209 15:37:03.589521 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:37:11 crc kubenswrapper[4770]: E1209 15:37:11.593030 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:37:14 crc kubenswrapper[4770]: I1209 15:37:14.588605 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:37:14 crc kubenswrapper[4770]: E1209 15:37:14.589498 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:37:16 crc kubenswrapper[4770]: E1209 15:37:16.591232 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:37:23 crc kubenswrapper[4770]: E1209 15:37:23.591009 4770 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:37:25 crc kubenswrapper[4770]: I1209 15:37:25.610888 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-dccdd6975-6g8sl" podUID="e9e75e98-4fff-4755-9908-1e0d4ac982bb" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 09 15:37:27 crc kubenswrapper[4770]: I1209 15:37:27.588652 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:37:27 crc kubenswrapper[4770]: E1209 15:37:27.589242 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:37:29 crc kubenswrapper[4770]: E1209 15:37:29.590711 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:37:38 crc kubenswrapper[4770]: E1209 15:37:38.599856 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:37:41 crc kubenswrapper[4770]: E1209 15:37:41.589755 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:37:42 crc kubenswrapper[4770]: I1209 15:37:42.589977 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:37:42 crc kubenswrapper[4770]: E1209 15:37:42.590875 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:37:50 crc kubenswrapper[4770]: I1209 15:37:50.591889 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:37:50 crc kubenswrapper[4770]: E1209 15:37:50.716521 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:37:50 crc kubenswrapper[4770]: E1209 15:37:50.716936 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:37:50 crc kubenswrapper[4770]: E1209 15:37:50.717268 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:37:50 crc kubenswrapper[4770]: E1209 15:37:50.718647 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:37:52 crc kubenswrapper[4770]: E1209 15:37:52.590972 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:37:54 crc kubenswrapper[4770]: I1209 15:37:54.588451 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:37:54 crc kubenswrapper[4770]: E1209 15:37:54.589040 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:38:02 crc kubenswrapper[4770]: E1209 15:38:02.590430 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:38:03 crc kubenswrapper[4770]: E1209 15:38:03.590238 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:38:08 crc kubenswrapper[4770]: I1209 15:38:08.598230 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:38:08 crc kubenswrapper[4770]: E1209 15:38:08.599541 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:38:16 crc kubenswrapper[4770]: E1209 15:38:16.721618 
4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:38:16 crc kubenswrapper[4770]: E1209 15:38:16.722264 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:38:16 crc kubenswrapper[4770]: E1209 15:38:16.722434 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:38:16 crc kubenswrapper[4770]: E1209 15:38:16.723676 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:38:17 crc kubenswrapper[4770]: E1209 15:38:17.590305 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:38:23 crc kubenswrapper[4770]: I1209 15:38:23.588281 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:38:23 crc kubenswrapper[4770]: E1209 15:38:23.589160 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:38:29 crc kubenswrapper[4770]: E1209 15:38:29.590442 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:38:29 crc kubenswrapper[4770]: E1209 15:38:29.590563 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:38:38 crc kubenswrapper[4770]: I1209 15:38:38.603078 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:38:38 crc kubenswrapper[4770]: E1209 15:38:38.604142 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" 
podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:38:42 crc kubenswrapper[4770]: E1209 15:38:42.592270 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:38:43 crc kubenswrapper[4770]: E1209 15:38:43.591558 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:38:50 crc kubenswrapper[4770]: I1209 15:38:50.588225 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:38:50 crc kubenswrapper[4770]: E1209 15:38:50.588979 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:38:54 crc kubenswrapper[4770]: E1209 15:38:54.591271 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:38:55 crc kubenswrapper[4770]: E1209 15:38:55.589977 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:39:01 crc kubenswrapper[4770]: I1209 15:39:01.588249 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:39:01 crc kubenswrapper[4770]: E1209 15:39:01.588971 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:39:06 crc kubenswrapper[4770]: E1209 15:39:06.599941 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:39:08 crc kubenswrapper[4770]: E1209 15:39:08.600131 4770 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:39:15 crc kubenswrapper[4770]: I1209 15:39:15.588867 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:39:15 crc kubenswrapper[4770]: E1209 15:39:15.589672 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:39:19 crc kubenswrapper[4770]: E1209 15:39:19.590917 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:39:20 crc kubenswrapper[4770]: E1209 15:39:20.590137 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:39:30 crc kubenswrapper[4770]: I1209 15:39:30.588823 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:39:30 crc kubenswrapper[4770]: E1209 15:39:30.589712 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:39:30 crc kubenswrapper[4770]: E1209 15:39:30.595790 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:39:31 crc kubenswrapper[4770]: E1209 15:39:31.590278 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:39:39 crc kubenswrapper[4770]: I1209 15:39:39.209720 4770 generic.go:334] "Generic (PLEG): container finished" podID="93cfef83-a919-435a-88c7-b6f2b2a6c480" containerID="2dffc4ee90d1ba00afc7f65902f12098ba68794e8224238463427aa9d48f0e70" exitCode=2 Dec 09 
15:39:39 crc kubenswrapper[4770]: I1209 15:39:39.209776 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" event={"ID":"93cfef83-a919-435a-88c7-b6f2b2a6c480","Type":"ContainerDied","Data":"2dffc4ee90d1ba00afc7f65902f12098ba68794e8224238463427aa9d48f0e70"} Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.737896 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.861401 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-ssh-key\") pod \"93cfef83-a919-435a-88c7-b6f2b2a6c480\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.861492 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-inventory\") pod \"93cfef83-a919-435a-88c7-b6f2b2a6c480\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.861637 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97k2b\" (UniqueName: \"kubernetes.io/projected/93cfef83-a919-435a-88c7-b6f2b2a6c480-kube-api-access-97k2b\") pod \"93cfef83-a919-435a-88c7-b6f2b2a6c480\" (UID: \"93cfef83-a919-435a-88c7-b6f2b2a6c480\") " Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.872032 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93cfef83-a919-435a-88c7-b6f2b2a6c480-kube-api-access-97k2b" (OuterVolumeSpecName: "kube-api-access-97k2b") pod "93cfef83-a919-435a-88c7-b6f2b2a6c480" (UID: "93cfef83-a919-435a-88c7-b6f2b2a6c480"). InnerVolumeSpecName "kube-api-access-97k2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.898501 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "93cfef83-a919-435a-88c7-b6f2b2a6c480" (UID: "93cfef83-a919-435a-88c7-b6f2b2a6c480"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.899124 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-inventory" (OuterVolumeSpecName: "inventory") pod "93cfef83-a919-435a-88c7-b6f2b2a6c480" (UID: "93cfef83-a919-435a-88c7-b6f2b2a6c480"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.965319 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.965412 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfef83-a919-435a-88c7-b6f2b2a6c480-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 15:39:40 crc kubenswrapper[4770]: I1209 15:39:40.965433 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97k2b\" (UniqueName: \"kubernetes.io/projected/93cfef83-a919-435a-88c7-b6f2b2a6c480-kube-api-access-97k2b\") on node \"crc\" DevicePath \"\"" Dec 09 15:39:41 crc kubenswrapper[4770]: I1209 15:39:41.232475 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" event={"ID":"93cfef83-a919-435a-88c7-b6f2b2a6c480","Type":"ContainerDied","Data":"2a01f3b47d03ffa530b206d7f97021b55ffdf2ae550740f31dc2f20ad820ccfa"} Dec 09 15:39:41 crc kubenswrapper[4770]: I1209 15:39:41.232522 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a01f3b47d03ffa530b206d7f97021b55ffdf2ae550740f31dc2f20ad820ccfa" Dec 09 15:39:41 crc kubenswrapper[4770]: I1209 15:39:41.232570 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5" Dec 09 15:39:44 crc kubenswrapper[4770]: I1209 15:39:44.589260 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:39:44 crc kubenswrapper[4770]: E1209 15:39:44.590086 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:39:44 crc kubenswrapper[4770]: E1209 15:39:44.593173 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:39:45 crc kubenswrapper[4770]: E1209 15:39:45.590288 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:39:57 crc kubenswrapper[4770]: E1209 15:39:57.590126 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:39:58 crc 
kubenswrapper[4770]: I1209 15:39:58.599668 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:39:58 crc kubenswrapper[4770]: E1209 15:39:58.600035 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:40:00 crc kubenswrapper[4770]: E1209 15:40:00.590850 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:40:11 crc kubenswrapper[4770]: I1209 15:40:11.588985 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:40:11 crc kubenswrapper[4770]: E1209 15:40:11.589830 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:40:11 crc kubenswrapper[4770]: E1209 15:40:11.597216 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:40:11 crc kubenswrapper[4770]: E1209 15:40:11.597230 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:40:14 crc kubenswrapper[4770]: I1209 15:40:14.877841 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fkkdn"] Dec 09 15:40:14 crc kubenswrapper[4770]: E1209 15:40:14.879489 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93cfef83-a919-435a-88c7-b6f2b2a6c480" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:40:14 crc kubenswrapper[4770]: I1209 15:40:14.879524 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="93cfef83-a919-435a-88c7-b6f2b2a6c480" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:40:14 crc kubenswrapper[4770]: I1209 15:40:14.879868 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="93cfef83-a919-435a-88c7-b6f2b2a6c480" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:40:14 crc kubenswrapper[4770]: I1209 15:40:14.881841 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:14 crc kubenswrapper[4770]: I1209 15:40:14.930559 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkkdn"] Dec 09 15:40:14 crc kubenswrapper[4770]: I1209 15:40:14.970742 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-catalog-content\") pod \"community-operators-fkkdn\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:14 crc kubenswrapper[4770]: I1209 15:40:14.970813 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-utilities\") pod \"community-operators-fkkdn\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:14 crc kubenswrapper[4770]: I1209 15:40:14.970968 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9nbt\" (UniqueName: \"kubernetes.io/projected/f7c51411-5626-4135-b550-ccdd29c3212d-kube-api-access-r9nbt\") pod \"community-operators-fkkdn\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:15 crc kubenswrapper[4770]: I1209 15:40:15.073107 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-catalog-content\") pod \"community-operators-fkkdn\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:15 crc kubenswrapper[4770]: I1209 15:40:15.073174 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-utilities\") pod \"community-operators-fkkdn\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:15 crc kubenswrapper[4770]: I1209 15:40:15.073222 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9nbt\" (UniqueName: \"kubernetes.io/projected/f7c51411-5626-4135-b550-ccdd29c3212d-kube-api-access-r9nbt\") pod \"community-operators-fkkdn\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:15 crc kubenswrapper[4770]: I1209 15:40:15.074446 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-catalog-content\") pod \"community-operators-fkkdn\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:15 crc kubenswrapper[4770]: I1209 15:40:15.074541 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-utilities\") pod \"community-operators-fkkdn\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:15 crc kubenswrapper[4770]: I1209 15:40:15.094767 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r9nbt\" (UniqueName: \"kubernetes.io/projected/f7c51411-5626-4135-b550-ccdd29c3212d-kube-api-access-r9nbt\") pod \"community-operators-fkkdn\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:15 crc kubenswrapper[4770]: I1209 15:40:15.209619 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:15 crc kubenswrapper[4770]: I1209 15:40:15.787604 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkkdn"] Dec 09 15:40:16 crc kubenswrapper[4770]: I1209 15:40:16.657291 4770 generic.go:334] "Generic (PLEG): container finished" podID="f7c51411-5626-4135-b550-ccdd29c3212d" containerID="db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811" exitCode=0 Dec 09 15:40:16 crc kubenswrapper[4770]: I1209 15:40:16.657364 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkkdn" event={"ID":"f7c51411-5626-4135-b550-ccdd29c3212d","Type":"ContainerDied","Data":"db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811"} Dec 09 15:40:16 crc kubenswrapper[4770]: I1209 15:40:16.657684 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkkdn" event={"ID":"f7c51411-5626-4135-b550-ccdd29c3212d","Type":"ContainerStarted","Data":"4dfbf2c4340b9f08eabb3c208dfd285c333ce278cec576e66cc6bcec4affcf80"} Dec 09 15:40:17 crc kubenswrapper[4770]: I1209 15:40:17.668819 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkkdn" event={"ID":"f7c51411-5626-4135-b550-ccdd29c3212d","Type":"ContainerStarted","Data":"9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e"} Dec 09 15:40:18 crc kubenswrapper[4770]: I1209 15:40:18.682789 4770 generic.go:334] "Generic (PLEG): container finished" podID="f7c51411-5626-4135-b550-ccdd29c3212d" containerID="9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e" exitCode=0 Dec 09 15:40:18 crc kubenswrapper[4770]: I1209 15:40:18.682928 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkkdn" event={"ID":"f7c51411-5626-4135-b550-ccdd29c3212d","Type":"ContainerDied","Data":"9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e"} Dec 09 15:40:19 crc kubenswrapper[4770]: I1209 15:40:19.694215 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkkdn" event={"ID":"f7c51411-5626-4135-b550-ccdd29c3212d","Type":"ContainerStarted","Data":"9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85"} Dec 09 15:40:19 crc kubenswrapper[4770]: I1209 15:40:19.724613 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fkkdn" podStartSLOduration=3.14676548 podStartE2EDuration="5.724487535s" podCreationTimestamp="2025-12-09 15:40:14 +0000 UTC" firstStartedPulling="2025-12-09 15:40:16.659131946 +0000 UTC m=+4648.555334082" lastFinishedPulling="2025-12-09 15:40:19.236854011 +0000 UTC m=+4651.133056137" observedRunningTime="2025-12-09 15:40:19.712868406 +0000 UTC m=+4651.609070572" watchObservedRunningTime="2025-12-09 15:40:19.724487535 +0000 UTC m=+4651.620689671" Dec 09 15:40:24 crc kubenswrapper[4770]: E1209 15:40:24.591395 4770 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:40:25 crc kubenswrapper[4770]: I1209 15:40:25.210821 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:25 crc kubenswrapper[4770]: I1209 15:40:25.210859 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:25 crc kubenswrapper[4770]: I1209 15:40:25.265556 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:25 crc kubenswrapper[4770]: I1209 15:40:25.588530 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:40:25 crc kubenswrapper[4770]: E1209 15:40:25.589186 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:40:25 crc kubenswrapper[4770]: I1209 15:40:25.815965 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:25 crc kubenswrapper[4770]: I1209 15:40:25.875031 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fkkdn"] Dec 09 15:40:26 crc kubenswrapper[4770]: E1209 15:40:26.599956 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:40:27 crc kubenswrapper[4770]: I1209 15:40:27.773968 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fkkdn" podUID="f7c51411-5626-4135-b550-ccdd29c3212d" containerName="registry-server" containerID="cri-o://9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85" gracePeriod=2 Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.286073 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.381257 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-utilities\") pod \"f7c51411-5626-4135-b550-ccdd29c3212d\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.381407 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9nbt\" (UniqueName: \"kubernetes.io/projected/f7c51411-5626-4135-b550-ccdd29c3212d-kube-api-access-r9nbt\") pod \"f7c51411-5626-4135-b550-ccdd29c3212d\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.381482 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-catalog-content\") pod \"f7c51411-5626-4135-b550-ccdd29c3212d\" (UID: \"f7c51411-5626-4135-b550-ccdd29c3212d\") " Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.382438 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-utilities" (OuterVolumeSpecName: "utilities") pod "f7c51411-5626-4135-b550-ccdd29c3212d" (UID: "f7c51411-5626-4135-b550-ccdd29c3212d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.392059 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c51411-5626-4135-b550-ccdd29c3212d-kube-api-access-r9nbt" (OuterVolumeSpecName: "kube-api-access-r9nbt") pod "f7c51411-5626-4135-b550-ccdd29c3212d" (UID: "f7c51411-5626-4135-b550-ccdd29c3212d"). InnerVolumeSpecName "kube-api-access-r9nbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.436419 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7c51411-5626-4135-b550-ccdd29c3212d" (UID: "f7c51411-5626-4135-b550-ccdd29c3212d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.484860 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.484905 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9nbt\" (UniqueName: \"kubernetes.io/projected/f7c51411-5626-4135-b550-ccdd29c3212d-kube-api-access-r9nbt\") on node \"crc\" DevicePath \"\"" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.484920 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c51411-5626-4135-b550-ccdd29c3212d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.786911 4770 generic.go:334] "Generic (PLEG): container finished" podID="f7c51411-5626-4135-b550-ccdd29c3212d" containerID="9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85" exitCode=0 Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.786969 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkkdn" event={"ID":"f7c51411-5626-4135-b550-ccdd29c3212d","Type":"ContainerDied","Data":"9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85"} Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.787265 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkkdn" event={"ID":"f7c51411-5626-4135-b550-ccdd29c3212d","Type":"ContainerDied","Data":"4dfbf2c4340b9f08eabb3c208dfd285c333ce278cec576e66cc6bcec4affcf80"} Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.787290 4770 scope.go:117] "RemoveContainer" containerID="9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.787026 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkkdn" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.811532 4770 scope.go:117] "RemoveContainer" containerID="9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.824317 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fkkdn"] Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.834447 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fkkdn"] Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.835858 4770 scope.go:117] "RemoveContainer" containerID="db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.878739 4770 scope.go:117] "RemoveContainer" containerID="9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85" Dec 09 15:40:28 crc kubenswrapper[4770]: E1209 15:40:28.879253 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85\": container with ID starting with 9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85 not found: ID does not exist" containerID="9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.879306 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85"} err="failed to get container status \"9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85\": rpc error: code = NotFound desc = could not find container \"9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85\": container with ID starting with 9807f782882434bb313104b786b8348030129c6a647e32ce121a50ece1cf9d85 not found: ID does not exist" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.879345 4770 scope.go:117] "RemoveContainer" containerID="9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e" Dec 09 15:40:28 crc kubenswrapper[4770]: E1209 15:40:28.879703 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e\": container with ID starting with 9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e not found: ID does not exist" containerID="9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.879760 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e"} err="failed to get container status \"9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e\": rpc error: code = NotFound desc = could not find container \"9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e\": container with ID starting with 9ba68e2868dc1aaa1b9aa4e3a0878155a30a2e961a65f84341f0f31f03ae9a1e not found: ID does not exist" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.879790 4770 scope.go:117] "RemoveContainer" containerID="db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811" Dec 09 15:40:28 crc kubenswrapper[4770]: E1209 15:40:28.881962 4770 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811\": container with ID starting with db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811 not found: ID does not exist" containerID="db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811" Dec 09 15:40:28 crc kubenswrapper[4770]: I1209 15:40:28.882026 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811"} err="failed to get container status \"db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811\": rpc error: code = NotFound desc = could not find container \"db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811\": container with ID starting with db143bed048a755b71161aff0921df44c7767280935962f8b77d0178a749f811 not found: ID does not exist" Dec 09 15:40:30 crc kubenswrapper[4770]: I1209 15:40:30.601000 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c51411-5626-4135-b550-ccdd29c3212d" path="/var/lib/kubelet/pods/f7c51411-5626-4135-b550-ccdd29c3212d/volumes" Dec 09 15:40:37 crc kubenswrapper[4770]: I1209 15:40:37.588163 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:40:37 crc kubenswrapper[4770]: E1209 15:40:37.588988 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:40:38 crc kubenswrapper[4770]: E1209 15:40:38.599955 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:40:40 crc kubenswrapper[4770]: E1209 15:40:40.589886 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:40:50 crc kubenswrapper[4770]: E1209 15:40:50.591881 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:40:51 crc kubenswrapper[4770]: I1209 15:40:51.588933 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:40:51 crc kubenswrapper[4770]: E1209 15:40:51.590840 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:40:52 crc kubenswrapper[4770]: I1209 15:40:52.013355 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"1927e27317d66a915b08addb11dfd6d01110ef294a4200d4cc67b6485ff2f786"} Dec 09 15:41:04 crc kubenswrapper[4770]: E1209 15:41:04.601072 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:41:06 crc kubenswrapper[4770]: E1209 15:41:06.590875 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:41:17 crc kubenswrapper[4770]: E1209 15:41:17.591983 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:41:21 crc kubenswrapper[4770]: E1209 15:41:21.593594 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:41:28 crc kubenswrapper[4770]: E1209 15:41:28.599105 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:41:36 crc kubenswrapper[4770]: E1209 15:41:36.591095 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:41:39 crc kubenswrapper[4770]: E1209 15:41:39.592398 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:41:50 crc kubenswrapper[4770]: E1209 15:41:50.592826 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:41:52 crc kubenswrapper[4770]: E1209 15:41:52.591948 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:42:05 crc kubenswrapper[4770]: E1209 15:42:05.590881 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:42:06 crc kubenswrapper[4770]: E1209 15:42:06.590972 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:42:17 crc kubenswrapper[4770]: E1209 15:42:17.592888 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:42:20 crc kubenswrapper[4770]: E1209 15:42:20.591131 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:42:30 crc kubenswrapper[4770]: E1209 15:42:30.593144 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:42:34 crc kubenswrapper[4770]: E1209 15:42:34.591143 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:42:41 crc kubenswrapper[4770]: E1209 15:42:41.589710 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:42:42 crc 
kubenswrapper[4770]: I1209 15:42:42.121948 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rw2sd"] Dec 09 15:42:42 crc kubenswrapper[4770]: E1209 15:42:42.122998 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c51411-5626-4135-b550-ccdd29c3212d" containerName="extract-utilities" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.123289 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c51411-5626-4135-b550-ccdd29c3212d" containerName="extract-utilities" Dec 09 15:42:42 crc kubenswrapper[4770]: E1209 15:42:42.123360 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c51411-5626-4135-b550-ccdd29c3212d" containerName="registry-server" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.123374 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c51411-5626-4135-b550-ccdd29c3212d" containerName="registry-server" Dec 09 15:42:42 crc kubenswrapper[4770]: E1209 15:42:42.123398 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c51411-5626-4135-b550-ccdd29c3212d" containerName="extract-content" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.123410 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c51411-5626-4135-b550-ccdd29c3212d" containerName="extract-content" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.125718 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c51411-5626-4135-b550-ccdd29c3212d" containerName="registry-server" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.128393 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.134467 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rw2sd"] Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.242665 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf9zc\" (UniqueName: \"kubernetes.io/projected/c53d24ec-c65b-4470-a7b5-e9b660187b28-kube-api-access-nf9zc\") pod \"certified-operators-rw2sd\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.242919 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-catalog-content\") pod \"certified-operators-rw2sd\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.243172 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-utilities\") pod \"certified-operators-rw2sd\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.345114 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-utilities\") pod \"certified-operators-rw2sd\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 
15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.345176 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf9zc\" (UniqueName: \"kubernetes.io/projected/c53d24ec-c65b-4470-a7b5-e9b660187b28-kube-api-access-nf9zc\") pod \"certified-operators-rw2sd\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.345253 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-catalog-content\") pod \"certified-operators-rw2sd\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.345807 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-catalog-content\") pod \"certified-operators-rw2sd\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.345945 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-utilities\") pod \"certified-operators-rw2sd\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.364358 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf9zc\" (UniqueName: \"kubernetes.io/projected/c53d24ec-c65b-4470-a7b5-e9b660187b28-kube-api-access-nf9zc\") pod \"certified-operators-rw2sd\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.466081 4770 util.go:30] "No sandbox for pod can be found. 
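The certified-operators-rw2sd events that follow repeat the catalog-pod pattern seen earlier with community-operators-fkkdn: two short-lived extraction containers (extract-utilities and extract-content, named in the RemoveStaleState entries above) run to completion with exitCode=0 before the long-running registry-server starts, so their "ContainerDied" PLEG events signal normal progress rather than failures. A condensed sketch of reading those events; the types are hypothetical, not kubelet code:

// Sketch: a zero exit code on a ContainerDied event for an extraction
// step is expected, not an error.
package main

import "fmt"

type plegEvent struct {
	container string
	typ       string // "ContainerStarted" or "ContainerDied"
	exitCode  int
}

func main() {
	events := []plegEvent{ // condensed from the entries above and below
		{"extract-utilities", "ContainerDied", 0},
		{"extract-content", "ContainerDied", 0},
		{"registry-server", "ContainerStarted", 0},
	}
	for _, e := range events {
		if e.typ == "ContainerDied" && e.exitCode == 0 {
			fmt.Println(e.container, "completed normally (extraction step)")
		} else {
			fmt.Println(e.container, "running")
		}
	}
}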
Need to start a new one" pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:42 crc kubenswrapper[4770]: I1209 15:42:42.967294 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rw2sd"] Dec 09 15:42:43 crc kubenswrapper[4770]: I1209 15:42:43.411452 4770 generic.go:334] "Generic (PLEG): container finished" podID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerID="e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127" exitCode=0 Dec 09 15:42:43 crc kubenswrapper[4770]: I1209 15:42:43.411510 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rw2sd" event={"ID":"c53d24ec-c65b-4470-a7b5-e9b660187b28","Type":"ContainerDied","Data":"e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127"} Dec 09 15:42:43 crc kubenswrapper[4770]: I1209 15:42:43.411570 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rw2sd" event={"ID":"c53d24ec-c65b-4470-a7b5-e9b660187b28","Type":"ContainerStarted","Data":"a2cc01846b63fdfdaabecc78614c116853adf524b50f0f88cd92f308a03fd7c1"} Dec 09 15:42:45 crc kubenswrapper[4770]: I1209 15:42:45.483070 4770 generic.go:334] "Generic (PLEG): container finished" podID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerID="61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf" exitCode=0 Dec 09 15:42:45 crc kubenswrapper[4770]: I1209 15:42:45.483168 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rw2sd" event={"ID":"c53d24ec-c65b-4470-a7b5-e9b660187b28","Type":"ContainerDied","Data":"61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf"} Dec 09 15:42:45 crc kubenswrapper[4770]: E1209 15:42:45.589282 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:42:46 crc kubenswrapper[4770]: I1209 15:42:46.501453 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rw2sd" event={"ID":"c53d24ec-c65b-4470-a7b5-e9b660187b28","Type":"ContainerStarted","Data":"02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61"} Dec 09 15:42:52 crc kubenswrapper[4770]: I1209 15:42:52.466641 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:52 crc kubenswrapper[4770]: I1209 15:42:52.467677 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:52 crc kubenswrapper[4770]: I1209 15:42:52.533957 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:52 crc kubenswrapper[4770]: I1209 15:42:52.557449 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rw2sd" podStartSLOduration=8.017797642 podStartE2EDuration="10.557391783s" podCreationTimestamp="2025-12-09 15:42:42 +0000 UTC" firstStartedPulling="2025-12-09 15:42:43.41362292 +0000 UTC m=+4795.309825056" lastFinishedPulling="2025-12-09 15:42:45.953217061 +0000 UTC m=+4797.849419197" observedRunningTime="2025-12-09 15:42:46.522115263 
+0000 UTC m=+4798.418317399" watchObservedRunningTime="2025-12-09 15:42:52.557391783 +0000 UTC m=+4804.453593909" Dec 09 15:42:52 crc kubenswrapper[4770]: I1209 15:42:52.630621 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:52 crc kubenswrapper[4770]: I1209 15:42:52.800523 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rw2sd"] Dec 09 15:42:54 crc kubenswrapper[4770]: I1209 15:42:54.597447 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rw2sd" podUID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerName="registry-server" containerID="cri-o://02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61" gracePeriod=2 Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.114142 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.174862 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf9zc\" (UniqueName: \"kubernetes.io/projected/c53d24ec-c65b-4470-a7b5-e9b660187b28-kube-api-access-nf9zc\") pod \"c53d24ec-c65b-4470-a7b5-e9b660187b28\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.174991 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-catalog-content\") pod \"c53d24ec-c65b-4470-a7b5-e9b660187b28\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.175134 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-utilities\") pod \"c53d24ec-c65b-4470-a7b5-e9b660187b28\" (UID: \"c53d24ec-c65b-4470-a7b5-e9b660187b28\") " Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.176421 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-utilities" (OuterVolumeSpecName: "utilities") pod "c53d24ec-c65b-4470-a7b5-e9b660187b28" (UID: "c53d24ec-c65b-4470-a7b5-e9b660187b28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.183371 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53d24ec-c65b-4470-a7b5-e9b660187b28-kube-api-access-nf9zc" (OuterVolumeSpecName: "kube-api-access-nf9zc") pod "c53d24ec-c65b-4470-a7b5-e9b660187b28" (UID: "c53d24ec-c65b-4470-a7b5-e9b660187b28"). InnerVolumeSpecName "kube-api-access-nf9zc". 
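The "could not find container ... NotFound" errors just below are a benign race rather than data loss: the PLEG cleanup path and the RemoveContainer path both try to delete the same already-removed containers, and kubelet simply logs the NotFound and moves on. A deleter can treat that status as success; a minimal sketch assuming the standard gRPC status package (the removeContainer helper is illustrative, not a real CRI client):

// Sketch: treat codes.NotFound from a delete as "already done",
// making the deletion idempotent.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func removeContainer(id string) error {
	// Stand-in for a CRI RemoveContainer/ContainerStatus call.
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := removeContainer("02dd42f9de646f")
	if status.Code(err) == codes.NotFound {
		fmt.Println("already gone; treating delete as successful")
		err = nil
	}
	if err != nil {
		panic(err)
	}
}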
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.277370 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.277406 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf9zc\" (UniqueName: \"kubernetes.io/projected/c53d24ec-c65b-4470-a7b5-e9b660187b28-kube-api-access-nf9zc\") on node \"crc\" DevicePath \"\"" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.328063 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c53d24ec-c65b-4470-a7b5-e9b660187b28" (UID: "c53d24ec-c65b-4470-a7b5-e9b660187b28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.379174 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53d24ec-c65b-4470-a7b5-e9b660187b28-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.608245 4770 generic.go:334] "Generic (PLEG): container finished" podID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerID="02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61" exitCode=0 Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.608301 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rw2sd" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.608319 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rw2sd" event={"ID":"c53d24ec-c65b-4470-a7b5-e9b660187b28","Type":"ContainerDied","Data":"02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61"} Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.608711 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rw2sd" event={"ID":"c53d24ec-c65b-4470-a7b5-e9b660187b28","Type":"ContainerDied","Data":"a2cc01846b63fdfdaabecc78614c116853adf524b50f0f88cd92f308a03fd7c1"} Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.608821 4770 scope.go:117] "RemoveContainer" containerID="02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.631864 4770 scope.go:117] "RemoveContainer" containerID="61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.650005 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rw2sd"] Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.660479 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rw2sd"] Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.667006 4770 scope.go:117] "RemoveContainer" containerID="e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.709067 4770 scope.go:117] "RemoveContainer" containerID="02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61" Dec 09 15:42:55 crc kubenswrapper[4770]: E1209 15:42:55.709698 4770 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61\": container with ID starting with 02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61 not found: ID does not exist" containerID="02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.709757 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61"} err="failed to get container status \"02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61\": rpc error: code = NotFound desc = could not find container \"02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61\": container with ID starting with 02dd42f9de646fb5c55c49bc4fd2abf975be4f2d1a9429771f0783cb01218b61 not found: ID does not exist" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.709784 4770 scope.go:117] "RemoveContainer" containerID="61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf" Dec 09 15:42:55 crc kubenswrapper[4770]: E1209 15:42:55.710159 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf\": container with ID starting with 61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf not found: ID does not exist" containerID="61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.710220 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf"} err="failed to get container status \"61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf\": rpc error: code = NotFound desc = could not find container \"61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf\": container with ID starting with 61afdfc4cbb08eb02c9d9786c1a29629cea78cfb5b9f027ad1bc07d15ac267bf not found: ID does not exist" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.710239 4770 scope.go:117] "RemoveContainer" containerID="e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127" Dec 09 15:42:55 crc kubenswrapper[4770]: E1209 15:42:55.710540 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127\": container with ID starting with e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127 not found: ID does not exist" containerID="e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127" Dec 09 15:42:55 crc kubenswrapper[4770]: I1209 15:42:55.710597 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127"} err="failed to get container status \"e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127\": rpc error: code = NotFound desc = could not find container \"e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127\": container with ID starting with e4f18f74e5ef6f54587c9af76d6cd88d205afd46989b601f402f0c49d7aff127 not found: ID does not exist" Dec 09 15:42:56 crc kubenswrapper[4770]: I1209 15:42:56.590489 4770 provider.go:102] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:42:56 crc kubenswrapper[4770]: I1209 15:42:56.603966 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c53d24ec-c65b-4470-a7b5-e9b660187b28" path="/var/lib/kubelet/pods/c53d24ec-c65b-4470-a7b5-e9b660187b28/volumes" Dec 09 15:42:56 crc kubenswrapper[4770]: E1209 15:42:56.719611 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:42:56 crc kubenswrapper[4770]: E1209 15:42:56.719696 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:42:56 crc kubenswrapper[4770]: E1209 15:42:56.719945 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:ni
l,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:42:56 crc kubenswrapper[4770]: E1209 15:42:56.721175 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:42:57 crc kubenswrapper[4770]: E1209 15:42:57.591766 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:43:09 crc kubenswrapper[4770]: E1209 15:43:09.590009 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:43:09 crc kubenswrapper[4770]: E1209 15:43:09.590092 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:43:14 crc kubenswrapper[4770]: I1209 15:43:14.244165 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:43:14 crc kubenswrapper[4770]: I1209 15:43:14.244806 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:43:20 crc kubenswrapper[4770]: E1209 15:43:20.590395 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:43:23 crc kubenswrapper[4770]: E1209 15:43:23.720327 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:43:23 crc kubenswrapper[4770]: E1209 15:43:23.721027 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:43:23 crc kubenswrapper[4770]: E1209 15:43:23.721211 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:43:23 crc kubenswrapper[4770]: E1209 15:43:23.722644 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:43:33 crc kubenswrapper[4770]: E1209 15:43:33.592394 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:43:38 crc kubenswrapper[4770]: E1209 15:43:38.601059 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:43:44 crc kubenswrapper[4770]: I1209 15:43:44.243985 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:43:44 crc kubenswrapper[4770]: I1209 15:43:44.244655 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:43:46 crc kubenswrapper[4770]: E1209 15:43:46.590217 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:43:53 crc kubenswrapper[4770]: E1209 15:43:53.590855 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:43:59 crc kubenswrapper[4770]: E1209 15:43:59.590660 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:44:04 crc kubenswrapper[4770]: E1209 15:44:04.610251 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:44:11 crc kubenswrapper[4770]: E1209 15:44:11.590276 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:44:14 crc kubenswrapper[4770]: I1209 15:44:14.243274 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:44:14 crc kubenswrapper[4770]: I1209 15:44:14.243817 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:44:14 crc kubenswrapper[4770]: I1209 15:44:14.243873 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 15:44:14 crc kubenswrapper[4770]: I1209 15:44:14.244450 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1927e27317d66a915b08addb11dfd6d01110ef294a4200d4cc67b6485ff2f786"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:44:14 crc kubenswrapper[4770]: I1209 15:44:14.244519 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" 
podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://1927e27317d66a915b08addb11dfd6d01110ef294a4200d4cc67b6485ff2f786" gracePeriod=600 Dec 09 15:44:14 crc kubenswrapper[4770]: I1209 15:44:14.563115 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="1927e27317d66a915b08addb11dfd6d01110ef294a4200d4cc67b6485ff2f786" exitCode=0 Dec 09 15:44:14 crc kubenswrapper[4770]: I1209 15:44:14.563187 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"1927e27317d66a915b08addb11dfd6d01110ef294a4200d4cc67b6485ff2f786"} Dec 09 15:44:14 crc kubenswrapper[4770]: I1209 15:44:14.563467 4770 scope.go:117] "RemoveContainer" containerID="ceb4dd35a2621fd8d99ce9044415bda4160f46b7aba79edb1eb36123b843e3a7" Dec 09 15:44:15 crc kubenswrapper[4770]: I1209 15:44:15.574588 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"} Dec 09 15:44:16 crc kubenswrapper[4770]: E1209 15:44:16.590801 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:44:22 crc kubenswrapper[4770]: E1209 15:44:22.594841 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:44:27 crc kubenswrapper[4770]: E1209 15:44:27.591953 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:44:35 crc kubenswrapper[4770]: E1209 15:44:35.591217 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:44:39 crc kubenswrapper[4770]: E1209 15:44:39.591485 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:44:46 crc kubenswrapper[4770]: E1209 15:44:46.590781 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:44:46 crc kubenswrapper[4770]: I1209 15:44:46.932036 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-t2q8w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 15:44:46 crc kubenswrapper[4770]: I1209 15:44:46.932330 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" podUID="639ac3bd-8610-4f95-98f8-ad53a5c0d1fd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 15:44:46 crc kubenswrapper[4770]: I1209 15:44:46.932174 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-t2q8w container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 15:44:46 crc kubenswrapper[4770]: I1209 15:44:46.932385 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-t2q8w" podUID="639ac3bd-8610-4f95-98f8-ad53a5c0d1fd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 15:44:50 crc kubenswrapper[4770]: E1209 15:44:50.590997 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:44:57 crc kubenswrapper[4770]: E1209 15:44:57.590920 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.032856 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q"] Dec 09 15:44:58 crc kubenswrapper[4770]: E1209 15:44:58.033414 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerName="extract-content" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.033441 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerName="extract-content" Dec 09 15:44:58 crc kubenswrapper[4770]: E1209 15:44:58.033466 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerName="registry-server" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.033474 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerName="registry-server" Dec 09 15:44:58 crc kubenswrapper[4770]: E1209 15:44:58.033498 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerName="extract-utilities" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.033505 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerName="extract-utilities" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.033791 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53d24ec-c65b-4470-a7b5-e9b660187b28" containerName="registry-server" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.034612 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.037249 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.037631 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nncqh" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.039461 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.047881 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.048698 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q"] Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.141373 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.141513 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.141554 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-492vq\" (UniqueName: \"kubernetes.io/projected/aa405f9b-e080-47ea-8728-509d0244f8d7-kube-api-access-492vq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.243537 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.243653 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-492vq\" (UniqueName: \"kubernetes.io/projected/aa405f9b-e080-47ea-8728-509d0244f8d7-kube-api-access-492vq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.243900 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.252518 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.253145 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.264854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-492vq\" (UniqueName: \"kubernetes.io/projected/aa405f9b-e080-47ea-8728-509d0244f8d7-kube-api-access-492vq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.365848 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:44:58 crc kubenswrapper[4770]: I1209 15:44:58.934086 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q"] Dec 09 15:44:59 crc kubenswrapper[4770]: W1209 15:44:59.573327 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa405f9b_e080_47ea_8728_509d0244f8d7.slice/crio-89c304c9e661c5c9ff10bb58e330f72f4a74a1b317ed948e630b2539922e8506 WatchSource:0}: Error finding container 89c304c9e661c5c9ff10bb58e330f72f4a74a1b317ed948e630b2539922e8506: Status 404 returned error can't find the container with id 89c304c9e661c5c9ff10bb58e330f72f4a74a1b317ed948e630b2539922e8506 Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.094954 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" event={"ID":"aa405f9b-e080-47ea-8728-509d0244f8d7","Type":"ContainerStarted","Data":"89c304c9e661c5c9ff10bb58e330f72f4a74a1b317ed948e630b2539922e8506"} Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.149230 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk"] Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.150774 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.153331 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.154718 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.167810 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk"] Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.293502 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f6e4cda-7390-4885-b482-985caa9a726d-secret-volume\") pod \"collect-profiles-29421585-swlrk\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.293653 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnxn\" (UniqueName: \"kubernetes.io/projected/3f6e4cda-7390-4885-b482-985caa9a726d-kube-api-access-5vnxn\") pod \"collect-profiles-29421585-swlrk\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.293740 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f6e4cda-7390-4885-b482-985caa9a726d-config-volume\") pod \"collect-profiles-29421585-swlrk\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 
15:45:00.396629 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vnxn\" (UniqueName: \"kubernetes.io/projected/3f6e4cda-7390-4885-b482-985caa9a726d-kube-api-access-5vnxn\") pod \"collect-profiles-29421585-swlrk\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.397099 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f6e4cda-7390-4885-b482-985caa9a726d-config-volume\") pod \"collect-profiles-29421585-swlrk\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.397288 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f6e4cda-7390-4885-b482-985caa9a726d-secret-volume\") pod \"collect-profiles-29421585-swlrk\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.398565 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f6e4cda-7390-4885-b482-985caa9a726d-config-volume\") pod \"collect-profiles-29421585-swlrk\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.409344 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f6e4cda-7390-4885-b482-985caa9a726d-secret-volume\") pod \"collect-profiles-29421585-swlrk\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.417640 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vnxn\" (UniqueName: \"kubernetes.io/projected/3f6e4cda-7390-4885-b482-985caa9a726d-kube-api-access-5vnxn\") pod \"collect-profiles-29421585-swlrk\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:00 crc kubenswrapper[4770]: I1209 15:45:00.528598 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:01 crc kubenswrapper[4770]: I1209 15:45:01.001419 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk"] Dec 09 15:45:01 crc kubenswrapper[4770]: I1209 15:45:01.109094 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" event={"ID":"3f6e4cda-7390-4885-b482-985caa9a726d","Type":"ContainerStarted","Data":"1e09eb71fb5910adb1a67da852c6ef88674c2b0b9815901b6c0742d8158f72dd"} Dec 09 15:45:01 crc kubenswrapper[4770]: I1209 15:45:01.111549 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" event={"ID":"aa405f9b-e080-47ea-8728-509d0244f8d7","Type":"ContainerStarted","Data":"e02fba805f539e75da6f4ffd159c8f5be578b3a149eae4d6205e3db213073e64"} Dec 09 15:45:01 crc kubenswrapper[4770]: I1209 15:45:01.134884 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" podStartSLOduration=2.662567611 podStartE2EDuration="3.134855813s" podCreationTimestamp="2025-12-09 15:44:58 +0000 UTC" firstStartedPulling="2025-12-09 15:44:59.576623708 +0000 UTC m=+4931.472825844" lastFinishedPulling="2025-12-09 15:45:00.04891191 +0000 UTC m=+4931.945114046" observedRunningTime="2025-12-09 15:45:01.130901425 +0000 UTC m=+4933.027103581" watchObservedRunningTime="2025-12-09 15:45:01.134855813 +0000 UTC m=+4933.031057949" Dec 09 15:45:02 crc kubenswrapper[4770]: I1209 15:45:02.127482 4770 generic.go:334] "Generic (PLEG): container finished" podID="3f6e4cda-7390-4885-b482-985caa9a726d" containerID="dab5c1d2a8dc59b31bf63eb21abf2f3c7bc94175ce238065212c5879ab4522f8" exitCode=0 Dec 09 15:45:02 crc kubenswrapper[4770]: I1209 15:45:02.127566 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" event={"ID":"3f6e4cda-7390-4885-b482-985caa9a726d","Type":"ContainerDied","Data":"dab5c1d2a8dc59b31bf63eb21abf2f3c7bc94175ce238065212c5879ab4522f8"} Dec 09 15:45:02 crc kubenswrapper[4770]: E1209 15:45:02.590919 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.549723 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.720662 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f6e4cda-7390-4885-b482-985caa9a726d-config-volume\") pod \"3f6e4cda-7390-4885-b482-985caa9a726d\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.720761 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vnxn\" (UniqueName: \"kubernetes.io/projected/3f6e4cda-7390-4885-b482-985caa9a726d-kube-api-access-5vnxn\") pod \"3f6e4cda-7390-4885-b482-985caa9a726d\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.720816 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f6e4cda-7390-4885-b482-985caa9a726d-secret-volume\") pod \"3f6e4cda-7390-4885-b482-985caa9a726d\" (UID: \"3f6e4cda-7390-4885-b482-985caa9a726d\") " Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.723687 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f6e4cda-7390-4885-b482-985caa9a726d-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f6e4cda-7390-4885-b482-985caa9a726d" (UID: "3f6e4cda-7390-4885-b482-985caa9a726d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.728344 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f6e4cda-7390-4885-b482-985caa9a726d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f6e4cda-7390-4885-b482-985caa9a726d" (UID: "3f6e4cda-7390-4885-b482-985caa9a726d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.729284 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f6e4cda-7390-4885-b482-985caa9a726d-kube-api-access-5vnxn" (OuterVolumeSpecName: "kube-api-access-5vnxn") pod "3f6e4cda-7390-4885-b482-985caa9a726d" (UID: "3f6e4cda-7390-4885-b482-985caa9a726d"). InnerVolumeSpecName "kube-api-access-5vnxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.825598 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f6e4cda-7390-4885-b482-985caa9a726d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.825637 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f6e4cda-7390-4885-b482-985caa9a726d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:45:03 crc kubenswrapper[4770]: I1209 15:45:03.825651 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vnxn\" (UniqueName: \"kubernetes.io/projected/3f6e4cda-7390-4885-b482-985caa9a726d-kube-api-access-5vnxn\") on node \"crc\" DevicePath \"\"" Dec 09 15:45:04 crc kubenswrapper[4770]: I1209 15:45:04.161494 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" event={"ID":"3f6e4cda-7390-4885-b482-985caa9a726d","Type":"ContainerDied","Data":"1e09eb71fb5910adb1a67da852c6ef88674c2b0b9815901b6c0742d8158f72dd"} Dec 09 15:45:04 crc kubenswrapper[4770]: I1209 15:45:04.161806 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e09eb71fb5910adb1a67da852c6ef88674c2b0b9815901b6c0742d8158f72dd" Dec 09 15:45:04 crc kubenswrapper[4770]: I1209 15:45:04.161867 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421585-swlrk" Dec 09 15:45:04 crc kubenswrapper[4770]: I1209 15:45:04.629310 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4"] Dec 09 15:45:04 crc kubenswrapper[4770]: I1209 15:45:04.640129 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421540-nqnq4"] Dec 09 15:45:06 crc kubenswrapper[4770]: I1209 15:45:06.599949 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3acf34a-2bce-4aea-bd90-34be5cf1a752" path="/var/lib/kubelet/pods/f3acf34a-2bce-4aea-bd90-34be5cf1a752/volumes" Dec 09 15:45:08 crc kubenswrapper[4770]: E1209 15:45:08.601366 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:45:14 crc kubenswrapper[4770]: E1209 15:45:14.590224 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.402235 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7gjlc"] Dec 09 15:45:18 crc kubenswrapper[4770]: E1209 15:45:18.403416 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f6e4cda-7390-4885-b482-985caa9a726d" containerName="collect-profiles" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.403435 4770 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6e4cda-7390-4885-b482-985caa9a726d" containerName="collect-profiles" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.403686 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f6e4cda-7390-4885-b482-985caa9a726d" containerName="collect-profiles" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.405684 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.427660 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gjlc"] Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.440595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l75g\" (UniqueName: \"kubernetes.io/projected/b9316838-b886-4050-8a15-92e8961b9792-kube-api-access-7l75g\") pod \"redhat-marketplace-7gjlc\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.440649 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-utilities\") pod \"redhat-marketplace-7gjlc\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.440807 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-catalog-content\") pod \"redhat-marketplace-7gjlc\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.542811 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l75g\" (UniqueName: \"kubernetes.io/projected/b9316838-b886-4050-8a15-92e8961b9792-kube-api-access-7l75g\") pod \"redhat-marketplace-7gjlc\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.542878 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-utilities\") pod \"redhat-marketplace-7gjlc\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.543102 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-catalog-content\") pod \"redhat-marketplace-7gjlc\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.543704 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-catalog-content\") pod \"redhat-marketplace-7gjlc\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 
15:45:18.544222 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-utilities\") pod \"redhat-marketplace-7gjlc\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.574575 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l75g\" (UniqueName: \"kubernetes.io/projected/b9316838-b886-4050-8a15-92e8961b9792-kube-api-access-7l75g\") pod \"redhat-marketplace-7gjlc\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.613919 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vvx98"] Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.616334 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.627052 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvx98"] Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.645302 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfd4p\" (UniqueName: \"kubernetes.io/projected/28a8e896-c732-492b-9607-f3a74ded2edf-kube-api-access-mfd4p\") pod \"redhat-operators-vvx98\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") " pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.645543 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-catalog-content\") pod \"redhat-operators-vvx98\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") " pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.645585 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-utilities\") pod \"redhat-operators-vvx98\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") " pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.748223 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-catalog-content\") pod \"redhat-operators-vvx98\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") " pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.748302 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-utilities\") pod \"redhat-operators-vvx98\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") " pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.748388 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfd4p\" (UniqueName: \"kubernetes.io/projected/28a8e896-c732-492b-9607-f3a74ded2edf-kube-api-access-mfd4p\") pod 
\"redhat-operators-vvx98\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") " pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.748638 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-catalog-content\") pod \"redhat-operators-vvx98\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") " pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.748934 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-utilities\") pod \"redhat-operators-vvx98\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") " pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.753574 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.769635 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfd4p\" (UniqueName: \"kubernetes.io/projected/28a8e896-c732-492b-9607-f3a74ded2edf-kube-api-access-mfd4p\") pod \"redhat-operators-vvx98\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") " pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:18 crc kubenswrapper[4770]: I1209 15:45:18.953997 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:19 crc kubenswrapper[4770]: I1209 15:45:19.282647 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gjlc"] Dec 09 15:45:20 crc kubenswrapper[4770]: I1209 15:45:20.060536 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvx98"] Dec 09 15:45:20 crc kubenswrapper[4770]: I1209 15:45:20.410024 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvx98" event={"ID":"28a8e896-c732-492b-9607-f3a74ded2edf","Type":"ContainerStarted","Data":"90bcd667aa7c6afedbf009e4bd64363685cf15e01cd71b19e56e596b8cbada19"} Dec 09 15:45:20 crc kubenswrapper[4770]: I1209 15:45:20.414815 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gjlc" event={"ID":"b9316838-b886-4050-8a15-92e8961b9792","Type":"ContainerStarted","Data":"39c266b1f0351b86265d98c18833ca578ee5fc3209d25cdde61dacf201bbffce"} Dec 09 15:45:21 crc kubenswrapper[4770]: I1209 15:45:21.433215 4770 generic.go:334] "Generic (PLEG): container finished" podID="b9316838-b886-4050-8a15-92e8961b9792" containerID="8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa" exitCode=0 Dec 09 15:45:21 crc kubenswrapper[4770]: I1209 15:45:21.433291 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gjlc" event={"ID":"b9316838-b886-4050-8a15-92e8961b9792","Type":"ContainerDied","Data":"8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa"} Dec 09 15:45:21 crc kubenswrapper[4770]: I1209 15:45:21.437140 4770 generic.go:334] "Generic (PLEG): container finished" podID="28a8e896-c732-492b-9607-f3a74ded2edf" containerID="f01ee9ef051f7b4057bc1d97cfe51c1ffa550db5ca3610dfceb5825c4c42f641" exitCode=0 Dec 09 15:45:21 crc kubenswrapper[4770]: I1209 15:45:21.437190 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvx98" event={"ID":"28a8e896-c732-492b-9607-f3a74ded2edf","Type":"ContainerDied","Data":"f01ee9ef051f7b4057bc1d97cfe51c1ffa550db5ca3610dfceb5825c4c42f641"} Dec 09 15:45:22 crc kubenswrapper[4770]: I1209 15:45:22.447341 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvx98" event={"ID":"28a8e896-c732-492b-9607-f3a74ded2edf","Type":"ContainerStarted","Data":"b18092346d80338693f056251068f2f675b16b9db12703419dc51cf4dc243323"} Dec 09 15:45:22 crc kubenswrapper[4770]: I1209 15:45:22.449431 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gjlc" event={"ID":"b9316838-b886-4050-8a15-92e8961b9792","Type":"ContainerStarted","Data":"e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3"} Dec 09 15:45:22 crc kubenswrapper[4770]: E1209 15:45:22.591868 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:45:23 crc kubenswrapper[4770]: I1209 15:45:23.461532 4770 generic.go:334] "Generic (PLEG): container finished" podID="b9316838-b886-4050-8a15-92e8961b9792" containerID="e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3" exitCode=0 Dec 09 15:45:23 crc kubenswrapper[4770]: I1209 15:45:23.461626 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gjlc" event={"ID":"b9316838-b886-4050-8a15-92e8961b9792","Type":"ContainerDied","Data":"e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3"} Dec 09 15:45:24 crc kubenswrapper[4770]: I1209 15:45:24.276705 4770 scope.go:117] "RemoveContainer" containerID="50a1cc15553812ad2f10915a554a60d2f92061c51707775c68d17ee44ff16270" Dec 09 15:45:25 crc kubenswrapper[4770]: I1209 15:45:25.511326 4770 generic.go:334] "Generic (PLEG): container finished" podID="28a8e896-c732-492b-9607-f3a74ded2edf" containerID="b18092346d80338693f056251068f2f675b16b9db12703419dc51cf4dc243323" exitCode=0 Dec 09 15:45:25 crc kubenswrapper[4770]: I1209 15:45:25.511388 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvx98" event={"ID":"28a8e896-c732-492b-9607-f3a74ded2edf","Type":"ContainerDied","Data":"b18092346d80338693f056251068f2f675b16b9db12703419dc51cf4dc243323"} Dec 09 15:45:26 crc kubenswrapper[4770]: I1209 15:45:26.521631 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gjlc" event={"ID":"b9316838-b886-4050-8a15-92e8961b9792","Type":"ContainerStarted","Data":"5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77"} Dec 09 15:45:26 crc kubenswrapper[4770]: I1209 15:45:26.526101 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvx98" event={"ID":"28a8e896-c732-492b-9607-f3a74ded2edf","Type":"ContainerStarted","Data":"7f86f7d0088d5a07a4a83b29d1b57ac0d9ddb17b30aa0efb701508b755a22989"} Dec 09 15:45:26 crc kubenswrapper[4770]: I1209 15:45:26.542347 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7gjlc" podStartSLOduration=4.425745424 podStartE2EDuration="8.542267282s" 
podCreationTimestamp="2025-12-09 15:45:18 +0000 UTC" firstStartedPulling="2025-12-09 15:45:21.436393301 +0000 UTC m=+4953.332595447" lastFinishedPulling="2025-12-09 15:45:25.552915159 +0000 UTC m=+4957.449117305" observedRunningTime="2025-12-09 15:45:26.54076886 +0000 UTC m=+4958.436971016" watchObservedRunningTime="2025-12-09 15:45:26.542267282 +0000 UTC m=+4958.438469428" Dec 09 15:45:26 crc kubenswrapper[4770]: I1209 15:45:26.569671 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vvx98" podStartSLOduration=4.026166405 podStartE2EDuration="8.569651572s" podCreationTimestamp="2025-12-09 15:45:18 +0000 UTC" firstStartedPulling="2025-12-09 15:45:21.439356612 +0000 UTC m=+4953.335558748" lastFinishedPulling="2025-12-09 15:45:25.982841779 +0000 UTC m=+4957.879043915" observedRunningTime="2025-12-09 15:45:26.561554641 +0000 UTC m=+4958.457756777" watchObservedRunningTime="2025-12-09 15:45:26.569651572 +0000 UTC m=+4958.465853698" Dec 09 15:45:28 crc kubenswrapper[4770]: E1209 15:45:28.631299 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:45:28 crc kubenswrapper[4770]: I1209 15:45:28.754174 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:28 crc kubenswrapper[4770]: I1209 15:45:28.754233 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:28 crc kubenswrapper[4770]: I1209 15:45:28.812429 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:28 crc kubenswrapper[4770]: I1209 15:45:28.955915 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:28 crc kubenswrapper[4770]: I1209 15:45:28.955978 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:30 crc kubenswrapper[4770]: I1209 15:45:30.011112 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvx98" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" containerName="registry-server" probeResult="failure" output=< Dec 09 15:45:30 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Dec 09 15:45:30 crc kubenswrapper[4770]: > Dec 09 15:45:35 crc kubenswrapper[4770]: E1209 15:45:35.591662 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:45:38 crc kubenswrapper[4770]: I1209 15:45:38.818211 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:38 crc kubenswrapper[4770]: I1209 15:45:38.874574 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gjlc"] Dec 09 
15:45:39 crc kubenswrapper[4770]: I1209 15:45:39.004539 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:39 crc kubenswrapper[4770]: I1209 15:45:39.078415 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:39 crc kubenswrapper[4770]: E1209 15:45:39.592059 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:45:39 crc kubenswrapper[4770]: I1209 15:45:39.675049 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7gjlc" podUID="b9316838-b886-4050-8a15-92e8961b9792" containerName="registry-server" containerID="cri-o://5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77" gracePeriod=2 Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.159778 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.327844 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l75g\" (UniqueName: \"kubernetes.io/projected/b9316838-b886-4050-8a15-92e8961b9792-kube-api-access-7l75g\") pod \"b9316838-b886-4050-8a15-92e8961b9792\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.327998 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-catalog-content\") pod \"b9316838-b886-4050-8a15-92e8961b9792\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.328144 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-utilities\") pod \"b9316838-b886-4050-8a15-92e8961b9792\" (UID: \"b9316838-b886-4050-8a15-92e8961b9792\") " Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.329125 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-utilities" (OuterVolumeSpecName: "utilities") pod "b9316838-b886-4050-8a15-92e8961b9792" (UID: "b9316838-b886-4050-8a15-92e8961b9792"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.337925 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9316838-b886-4050-8a15-92e8961b9792-kube-api-access-7l75g" (OuterVolumeSpecName: "kube-api-access-7l75g") pod "b9316838-b886-4050-8a15-92e8961b9792" (UID: "b9316838-b886-4050-8a15-92e8961b9792"). InnerVolumeSpecName "kube-api-access-7l75g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.356108 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9316838-b886-4050-8a15-92e8961b9792" (UID: "b9316838-b886-4050-8a15-92e8961b9792"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.430856 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.430893 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9316838-b886-4050-8a15-92e8961b9792-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.430908 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l75g\" (UniqueName: \"kubernetes.io/projected/b9316838-b886-4050-8a15-92e8961b9792-kube-api-access-7l75g\") on node \"crc\" DevicePath \"\"" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.677017 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvx98"] Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.687530 4770 generic.go:334] "Generic (PLEG): container finished" podID="b9316838-b886-4050-8a15-92e8961b9792" containerID="5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77" exitCode=0 Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.687608 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gjlc" event={"ID":"b9316838-b886-4050-8a15-92e8961b9792","Type":"ContainerDied","Data":"5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77"} Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.687630 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gjlc" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.687669 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gjlc" event={"ID":"b9316838-b886-4050-8a15-92e8961b9792","Type":"ContainerDied","Data":"39c266b1f0351b86265d98c18833ca578ee5fc3209d25cdde61dacf201bbffce"} Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.687697 4770 scope.go:117] "RemoveContainer" containerID="5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.687766 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vvx98" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" containerName="registry-server" containerID="cri-o://7f86f7d0088d5a07a4a83b29d1b57ac0d9ddb17b30aa0efb701508b755a22989" gracePeriod=2 Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.721932 4770 scope.go:117] "RemoveContainer" containerID="e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.746608 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gjlc"] Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.747554 4770 scope.go:117] "RemoveContainer" containerID="8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.758178 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gjlc"] Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.894043 4770 scope.go:117] "RemoveContainer" containerID="5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77" Dec 09 15:45:40 crc kubenswrapper[4770]: E1209 15:45:40.894523 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77\": container with ID starting with 5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77 not found: ID does not exist" containerID="5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.894562 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77"} err="failed to get container status \"5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77\": rpc error: code = NotFound desc = could not find container \"5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77\": container with ID starting with 5c483c7c7ee3b4e1a9fb851e4a935c898bed04729b9db32448664368f15f0f77 not found: ID does not exist" Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.894591 4770 scope.go:117] "RemoveContainer" containerID="e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3" Dec 09 15:45:40 crc kubenswrapper[4770]: E1209 15:45:40.894995 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3\": container with ID starting with e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3 not found: ID does not exist" containerID="e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3" Dec 09 15:45:40 crc 
Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.895050 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3"} err="failed to get container status \"e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3\": rpc error: code = NotFound desc = could not find container \"e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3\": container with ID starting with e1c3fa209a61ec1179d3403d30a045b3cfff97d89330d38ab103787bd5496ab3 not found: ID does not exist"
Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.895073 4770 scope.go:117] "RemoveContainer" containerID="8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa"
Dec 09 15:45:40 crc kubenswrapper[4770]: E1209 15:45:40.895320 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa\": container with ID starting with 8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa not found: ID does not exist" containerID="8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa"
Dec 09 15:45:40 crc kubenswrapper[4770]: I1209 15:45:40.895342 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa"} err="failed to get container status \"8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa\": rpc error: code = NotFound desc = could not find container \"8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa\": container with ID starting with 8831fb0e21fa3c94aaa0bf7ec038a396ff37c619fc4596a82a9ee2de9aebf3aa not found: ID does not exist"
Dec 09 15:45:41 crc kubenswrapper[4770]: I1209 15:45:41.699864 4770 generic.go:334] "Generic (PLEG): container finished" podID="28a8e896-c732-492b-9607-f3a74ded2edf" containerID="7f86f7d0088d5a07a4a83b29d1b57ac0d9ddb17b30aa0efb701508b755a22989" exitCode=0
Dec 09 15:45:41 crc kubenswrapper[4770]: I1209 15:45:41.699976 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvx98" event={"ID":"28a8e896-c732-492b-9607-f3a74ded2edf","Type":"ContainerDied","Data":"7f86f7d0088d5a07a4a83b29d1b57ac0d9ddb17b30aa0efb701508b755a22989"}
Dec 09 15:45:41 crc kubenswrapper[4770]: I1209 15:45:41.965587 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvx98"
Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.066512 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfd4p\" (UniqueName: \"kubernetes.io/projected/28a8e896-c732-492b-9607-f3a74ded2edf-kube-api-access-mfd4p\") pod \"28a8e896-c732-492b-9607-f3a74ded2edf\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") "
Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.066564 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-utilities\") pod \"28a8e896-c732-492b-9607-f3a74ded2edf\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") "
Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.066618 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-catalog-content\") pod \"28a8e896-c732-492b-9607-f3a74ded2edf\" (UID: \"28a8e896-c732-492b-9607-f3a74ded2edf\") "
Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.067624 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-utilities" (OuterVolumeSpecName: "utilities") pod "28a8e896-c732-492b-9607-f3a74ded2edf" (UID: "28a8e896-c732-492b-9607-f3a74ded2edf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.072145 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a8e896-c732-492b-9607-f3a74ded2edf-kube-api-access-mfd4p" (OuterVolumeSpecName: "kube-api-access-mfd4p") pod "28a8e896-c732-492b-9607-f3a74ded2edf" (UID: "28a8e896-c732-492b-9607-f3a74ded2edf"). InnerVolumeSpecName "kube-api-access-mfd4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.168908 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfd4p\" (UniqueName: \"kubernetes.io/projected/28a8e896-c732-492b-9607-f3a74ded2edf-kube-api-access-mfd4p\") on node \"crc\" DevicePath \"\""
Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.168943 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.182669 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28a8e896-c732-492b-9607-f3a74ded2edf" (UID: "28a8e896-c732-492b-9607-f3a74ded2edf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.271632 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a8e896-c732-492b-9607-f3a74ded2edf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.602383 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9316838-b886-4050-8a15-92e8961b9792" path="/var/lib/kubelet/pods/b9316838-b886-4050-8a15-92e8961b9792/volumes" Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.726587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvx98" event={"ID":"28a8e896-c732-492b-9607-f3a74ded2edf","Type":"ContainerDied","Data":"90bcd667aa7c6afedbf009e4bd64363685cf15e01cd71b19e56e596b8cbada19"} Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.727053 4770 scope.go:117] "RemoveContainer" containerID="7f86f7d0088d5a07a4a83b29d1b57ac0d9ddb17b30aa0efb701508b755a22989" Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.727258 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvx98" Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.756924 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvx98"] Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.765492 4770 scope.go:117] "RemoveContainer" containerID="b18092346d80338693f056251068f2f675b16b9db12703419dc51cf4dc243323" Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.770128 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vvx98"] Dec 09 15:45:42 crc kubenswrapper[4770]: I1209 15:45:42.799847 4770 scope.go:117] "RemoveContainer" containerID="f01ee9ef051f7b4057bc1d97cfe51c1ffa550db5ca3610dfceb5825c4c42f641" Dec 09 15:45:44 crc kubenswrapper[4770]: I1209 15:45:44.603098 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" path="/var/lib/kubelet/pods/28a8e896-c732-492b-9607-f3a74ded2edf/volumes" Dec 09 15:45:46 crc kubenswrapper[4770]: E1209 15:45:46.592185 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:45:53 crc kubenswrapper[4770]: E1209 15:45:53.591634 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:45:57 crc kubenswrapper[4770]: E1209 15:45:57.589984 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:46:05 crc kubenswrapper[4770]: E1209 15:46:05.590555 4770 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:46:10 crc kubenswrapper[4770]: E1209 15:46:10.591954 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:46:14 crc kubenswrapper[4770]: I1209 15:46:14.243863 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:46:14 crc kubenswrapper[4770]: I1209 15:46:14.244474 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:46:17 crc kubenswrapper[4770]: E1209 15:46:17.591029 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:46:21 crc kubenswrapper[4770]: E1209 15:46:21.589779 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:46:28 crc kubenswrapper[4770]: E1209 15:46:28.598866 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:46:33 crc kubenswrapper[4770]: E1209 15:46:33.591690 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:46:39 crc kubenswrapper[4770]: E1209 15:46:39.592126 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:46:44 crc kubenswrapper[4770]: 
Dec 09 15:46:44 crc kubenswrapper[4770]: I1209 15:46:44.243499 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:46:44 crc kubenswrapper[4770]: I1209 15:46:44.243956 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:46:46 crc kubenswrapper[4770]: E1209 15:46:46.591966 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:46:54 crc kubenswrapper[4770]: E1209 15:46:54.592678 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:46:57 crc kubenswrapper[4770]: E1209 15:46:57.590326 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:47:07 crc kubenswrapper[4770]: E1209 15:47:07.590788 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:47:08 crc kubenswrapper[4770]: E1209 15:47:08.597574 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:47:14 crc kubenswrapper[4770]: I1209 15:47:14.243485 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:47:14 crc kubenswrapper[4770]: I1209 15:47:14.244101 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:47:14 crc kubenswrapper[4770]: I1209 15:47:14.244148 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj"
Dec 09 15:47:14 crc kubenswrapper[4770]: I1209 15:47:14.244964 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 15:47:14 crc kubenswrapper[4770]: I1209 15:47:14.245106 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" gracePeriod=600
Dec 09 15:47:14 crc kubenswrapper[4770]: E1209 15:47:14.369304 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:47:14 crc kubenswrapper[4770]: I1209 15:47:14.656650 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" exitCode=0
Dec 09 15:47:14 crc kubenswrapper[4770]: I1209 15:47:14.656709 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"}
Dec 09 15:47:14 crc kubenswrapper[4770]: I1209 15:47:14.656786 4770 scope.go:117] "RemoveContainer" containerID="1927e27317d66a915b08addb11dfd6d01110ef294a4200d4cc67b6485ff2f786"
Dec 09 15:47:14 crc kubenswrapper[4770]: I1209 15:47:14.657921 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:47:14 crc kubenswrapper[4770]: E1209 15:47:14.658556 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:47:21 crc kubenswrapper[4770]: E1209 15:47:21.589287 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:47:22 crc kubenswrapper[4770]: E1209 15:47:22.590652 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:47:25 crc kubenswrapper[4770]: I1209 15:47:25.589255 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:47:25 crc kubenswrapper[4770]: E1209 15:47:25.590418 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:47:36 crc kubenswrapper[4770]: E1209 15:47:36.591959 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:47:36 crc kubenswrapper[4770]: E1209 15:47:36.592570 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:47:39 crc kubenswrapper[4770]: I1209 15:47:39.589166 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:47:39 crc kubenswrapper[4770]: E1209 15:47:39.590108 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:47:47 crc kubenswrapper[4770]: E1209 15:47:47.591869 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:47:47 crc kubenswrapper[4770]: E1209 15:47:47.592341 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:47:50 crc kubenswrapper[4770]: I1209 15:47:50.588787 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:47:50 crc kubenswrapper[4770]: E1209 15:47:50.589528 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:48:02 crc kubenswrapper[4770]: I1209 15:48:02.588915 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:48:02 crc kubenswrapper[4770]: E1209 15:48:02.589864 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:48:02 crc kubenswrapper[4770]: E1209 15:48:02.593580 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:48:02 crc kubenswrapper[4770]: I1209 15:48:02.594110 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:48:02 crc kubenswrapper[4770]: E1209 15:48:02.719540 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:48:02 crc kubenswrapper[4770]: E1209 15:48:02.719621 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:48:02 crc kubenswrapper[4770]: E1209 15:48:02.719773 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:48:02 crc kubenswrapper[4770]: E1209 15:48:02.720938 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:48:16 crc kubenswrapper[4770]: E1209 15:48:16.591993 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:48:17 crc kubenswrapper[4770]: I1209 15:48:17.588608 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:48:17 crc kubenswrapper[4770]: E1209 15:48:17.589278 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:48:17 crc kubenswrapper[4770]: E1209 15:48:17.589873 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:48:27 crc kubenswrapper[4770]: E1209 15:48:27.719967 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:48:27 crc kubenswrapper[4770]: E1209 15:48:27.720449 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:48:27 crc kubenswrapper[4770]: E1209 15:48:27.720569 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:48:27 crc kubenswrapper[4770]: E1209 15:48:27.721753 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:48:29 crc kubenswrapper[4770]: E1209 15:48:29.614246 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:48:32 crc kubenswrapper[4770]: I1209 15:48:32.589617 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:48:32 crc kubenswrapper[4770]: E1209 15:48:32.590692 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:48:40 crc kubenswrapper[4770]: E1209 15:48:40.592252 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:48:42 crc kubenswrapper[4770]: E1209 15:48:42.590517 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:48:47 crc kubenswrapper[4770]: I1209 15:48:47.588915 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:48:47 crc kubenswrapper[4770]: E1209 15:48:47.589796 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:48:53 crc kubenswrapper[4770]: E1209 15:48:53.591922 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:48:54 crc kubenswrapper[4770]: E1209 15:48:54.596875 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:49:00 crc kubenswrapper[4770]: I1209 
Dec 09 15:49:00 crc kubenswrapper[4770]: I1209 15:49:00.590558 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:49:00 crc kubenswrapper[4770]: E1209 15:49:00.591342 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:49:07 crc kubenswrapper[4770]: E1209 15:49:07.590749 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:49:08 crc kubenswrapper[4770]: E1209 15:49:08.597947 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:49:14 crc kubenswrapper[4770]: I1209 15:49:14.588441 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:49:14 crc kubenswrapper[4770]: E1209 15:49:14.589307 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:49:19 crc kubenswrapper[4770]: E1209 15:49:19.590347 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:49:20 crc kubenswrapper[4770]: E1209 15:49:20.589167 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:49:26 crc kubenswrapper[4770]: I1209 15:49:26.589877 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:49:26 crc kubenswrapper[4770]: E1209 15:49:26.590745 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:49:33 crc kubenswrapper[4770]: E1209 15:49:33.591902 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:49:34 crc kubenswrapper[4770]: E1209 15:49:34.591181 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:49:41 crc kubenswrapper[4770]: I1209 15:49:41.588558 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:49:41 crc kubenswrapper[4770]: E1209 15:49:41.589299 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:49:45 crc kubenswrapper[4770]: E1209 15:49:45.592212 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:49:48 crc kubenswrapper[4770]: E1209 15:49:48.605970 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:49:56 crc kubenswrapper[4770]: I1209 15:49:56.589473 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:49:56 crc kubenswrapper[4770]: E1209 15:49:56.590583 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:49:56 crc kubenswrapper[4770]: E1209 15:49:56.591284 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:50:02 crc kubenswrapper[4770]: E1209 15:50:02.591688 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:50:07 crc kubenswrapper[4770]: E1209 15:50:07.590883 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:50:11 crc kubenswrapper[4770]: I1209 15:50:11.588987 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:50:11 crc kubenswrapper[4770]: E1209 15:50:11.591099 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:50:15 crc kubenswrapper[4770]: E1209 15:50:15.590974 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:50:19 crc kubenswrapper[4770]: E1209 15:50:19.591356 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:50:26 crc kubenswrapper[4770]: I1209 15:50:26.589374 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:50:26 crc kubenswrapper[4770]: E1209 15:50:26.590649 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:50:26 crc kubenswrapper[4770]: E1209 15:50:26.591436 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:50:34 crc kubenswrapper[4770]: E1209 15:50:34.592585 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:50:37 crc kubenswrapper[4770]: E1209 15:50:37.590928 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:50:38 crc kubenswrapper[4770]: I1209 15:50:38.598802 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:50:38 crc kubenswrapper[4770]: E1209 15:50:38.599519 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:50:46 crc kubenswrapper[4770]: E1209 15:50:46.590558 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:50:49 crc kubenswrapper[4770]: E1209 15:50:49.590393 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:50:52 crc kubenswrapper[4770]: I1209 15:50:52.589125 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:50:52 crc kubenswrapper[4770]: E1209 15:50:52.589977 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:50:58 crc kubenswrapper[4770]: E1209 15:50:58.617947 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:51:03 crc kubenswrapper[4770]: E1209 15:51:03.592561 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:51:06 crc kubenswrapper[4770]: I1209 15:51:06.588538 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:51:06 crc kubenswrapper[4770]: E1209 15:51:06.589343 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:51:11 crc kubenswrapper[4770]: I1209 15:51:11.540276 4770 trace.go:236] Trace[648288444]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (09-Dec-2025 15:51:10.413) (total time: 1126ms): Dec 09 15:51:11 crc kubenswrapper[4770]: Trace[648288444]: [1.126859591s] [1.126859591s] END Dec 09 15:51:11 crc kubenswrapper[4770]: E1209 15:51:11.590590 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:51:14 crc kubenswrapper[4770]: E1209 15:51:14.590327 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.397951 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m67ps"] Dec 09 15:51:19 crc kubenswrapper[4770]: E1209 15:51:19.400341 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" containerName="extract-utilities" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.400472 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" containerName="extract-utilities" Dec 09 15:51:19 crc kubenswrapper[4770]: E1209 15:51:19.400596 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" containerName="extract-content" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.400680 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" containerName="extract-content" Dec 09 15:51:19 crc kubenswrapper[4770]: E1209 15:51:19.400814 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9316838-b886-4050-8a15-92e8961b9792" containerName="extract-content" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.400888 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9316838-b886-4050-8a15-92e8961b9792" containerName="extract-content" Dec 09 15:51:19 crc kubenswrapper[4770]: E1209 15:51:19.401002 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9316838-b886-4050-8a15-92e8961b9792" containerName="registry-server" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.401102 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b9316838-b886-4050-8a15-92e8961b9792" containerName="registry-server" Dec 09 15:51:19 crc kubenswrapper[4770]: E1209 15:51:19.401197 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" containerName="registry-server" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.401286 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" containerName="registry-server" Dec 09 15:51:19 crc kubenswrapper[4770]: E1209 15:51:19.401405 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9316838-b886-4050-8a15-92e8961b9792" containerName="extract-utilities" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.401496 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9316838-b886-4050-8a15-92e8961b9792" containerName="extract-utilities" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.401996 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9316838-b886-4050-8a15-92e8961b9792" containerName="registry-server" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.402129 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a8e896-c732-492b-9607-f3a74ded2edf" containerName="registry-server" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.404565 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.414019 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5ld5\" (UniqueName: \"kubernetes.io/projected/c228da97-740b-4667-956b-a20be31c893b-kube-api-access-l5ld5\") pod \"community-operators-m67ps\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.414186 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-utilities\") pod \"community-operators-m67ps\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.414315 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-catalog-content\") pod \"community-operators-m67ps\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.414686 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m67ps"] Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.517048 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-catalog-content\") pod \"community-operators-m67ps\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.517256 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5ld5\" (UniqueName: 
\"kubernetes.io/projected/c228da97-740b-4667-956b-a20be31c893b-kube-api-access-l5ld5\") pod \"community-operators-m67ps\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.517355 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-utilities\") pod \"community-operators-m67ps\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.517910 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-catalog-content\") pod \"community-operators-m67ps\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.517965 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-utilities\") pod \"community-operators-m67ps\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.554548 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5ld5\" (UniqueName: \"kubernetes.io/projected/c228da97-740b-4667-956b-a20be31c893b-kube-api-access-l5ld5\") pod \"community-operators-m67ps\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.588142 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:51:19 crc kubenswrapper[4770]: E1209 15:51:19.588548 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:51:19 crc kubenswrapper[4770]: I1209 15:51:19.736437 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:20 crc kubenswrapper[4770]: I1209 15:51:20.280804 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m67ps"] Dec 09 15:51:21 crc kubenswrapper[4770]: I1209 15:51:21.255856 4770 generic.go:334] "Generic (PLEG): container finished" podID="aa405f9b-e080-47ea-8728-509d0244f8d7" containerID="e02fba805f539e75da6f4ffd159c8f5be578b3a149eae4d6205e3db213073e64" exitCode=2 Dec 09 15:51:21 crc kubenswrapper[4770]: I1209 15:51:21.255947 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" event={"ID":"aa405f9b-e080-47ea-8728-509d0244f8d7","Type":"ContainerDied","Data":"e02fba805f539e75da6f4ffd159c8f5be578b3a149eae4d6205e3db213073e64"} Dec 09 15:51:21 crc kubenswrapper[4770]: I1209 15:51:21.257581 4770 generic.go:334] "Generic (PLEG): container finished" podID="c228da97-740b-4667-956b-a20be31c893b" containerID="25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6" exitCode=0 Dec 09 15:51:21 crc kubenswrapper[4770]: I1209 15:51:21.257623 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m67ps" event={"ID":"c228da97-740b-4667-956b-a20be31c893b","Type":"ContainerDied","Data":"25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6"} Dec 09 15:51:21 crc kubenswrapper[4770]: I1209 15:51:21.257656 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m67ps" event={"ID":"c228da97-740b-4667-956b-a20be31c893b","Type":"ContainerStarted","Data":"7a57c4dd07faa48f6ef9efb094c46ab1806411c7ee6686f7d579cf9b9f42e55a"} Dec 09 15:51:22 crc kubenswrapper[4770]: I1209 15:51:22.848473 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.004658 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-492vq\" (UniqueName: \"kubernetes.io/projected/aa405f9b-e080-47ea-8728-509d0244f8d7-kube-api-access-492vq\") pod \"aa405f9b-e080-47ea-8728-509d0244f8d7\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.005039 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-ssh-key\") pod \"aa405f9b-e080-47ea-8728-509d0244f8d7\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.005264 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-inventory\") pod \"aa405f9b-e080-47ea-8728-509d0244f8d7\" (UID: \"aa405f9b-e080-47ea-8728-509d0244f8d7\") " Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.016643 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa405f9b-e080-47ea-8728-509d0244f8d7-kube-api-access-492vq" (OuterVolumeSpecName: "kube-api-access-492vq") pod "aa405f9b-e080-47ea-8728-509d0244f8d7" (UID: "aa405f9b-e080-47ea-8728-509d0244f8d7"). InnerVolumeSpecName "kube-api-access-492vq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.037917 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aa405f9b-e080-47ea-8728-509d0244f8d7" (UID: "aa405f9b-e080-47ea-8728-509d0244f8d7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.039769 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-inventory" (OuterVolumeSpecName: "inventory") pod "aa405f9b-e080-47ea-8728-509d0244f8d7" (UID: "aa405f9b-e080-47ea-8728-509d0244f8d7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.107847 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.107882 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-492vq\" (UniqueName: \"kubernetes.io/projected/aa405f9b-e080-47ea-8728-509d0244f8d7-kube-api-access-492vq\") on node \"crc\" DevicePath \"\"" Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.107899 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa405f9b-e080-47ea-8728-509d0244f8d7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.277454 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" event={"ID":"aa405f9b-e080-47ea-8728-509d0244f8d7","Type":"ContainerDied","Data":"89c304c9e661c5c9ff10bb58e330f72f4a74a1b317ed948e630b2539922e8506"} Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.277510 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c304c9e661c5c9ff10bb58e330f72f4a74a1b317ed948e630b2539922e8506" Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.277476 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q" Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.279750 4770 generic.go:334] "Generic (PLEG): container finished" podID="c228da97-740b-4667-956b-a20be31c893b" containerID="0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a" exitCode=0 Dec 09 15:51:23 crc kubenswrapper[4770]: I1209 15:51:23.279767 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m67ps" event={"ID":"c228da97-740b-4667-956b-a20be31c893b","Type":"ContainerDied","Data":"0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a"} Dec 09 15:51:24 crc kubenswrapper[4770]: E1209 15:51:24.589632 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:51:25 crc kubenswrapper[4770]: I1209 15:51:25.310386 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m67ps" event={"ID":"c228da97-740b-4667-956b-a20be31c893b","Type":"ContainerStarted","Data":"96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904"} Dec 09 15:51:25 crc kubenswrapper[4770]: I1209 15:51:25.338326 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m67ps" podStartSLOduration=3.420519357 podStartE2EDuration="6.338268226s" podCreationTimestamp="2025-12-09 15:51:19 +0000 UTC" firstStartedPulling="2025-12-09 15:51:21.259201041 +0000 UTC m=+5313.155403177" lastFinishedPulling="2025-12-09 15:51:24.17694991 +0000 UTC m=+5316.073152046" observedRunningTime="2025-12-09 15:51:25.329848695 +0000 UTC m=+5317.226050841" watchObservedRunningTime="2025-12-09 15:51:25.338268226 +0000 UTC m=+5317.234470362" Dec 09 15:51:28 crc kubenswrapper[4770]: E1209 15:51:28.598480 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:51:29 crc kubenswrapper[4770]: I1209 15:51:29.737072 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:29 crc kubenswrapper[4770]: I1209 15:51:29.737368 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:29 crc kubenswrapper[4770]: I1209 15:51:29.794568 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:30 crc kubenswrapper[4770]: I1209 15:51:30.402878 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:30 crc kubenswrapper[4770]: I1209 15:51:30.479221 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m67ps"] Dec 09 15:51:30 crc kubenswrapper[4770]: I1209 15:51:30.589044 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
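
The "Observed pod startup duration" entry above is plain duration arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (15:51:25.338268226 − 15:51:19 ≈ 6.338s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling − firstStartedPulling ≈ 2.918s) from that, giving the logged 3.420519357s. A minimal Go sketch reproducing the arithmetic from the logged timestamps (the parse helper is mine, not kubelet's):

```go
package main

import (
	"fmt"
	"time"
)

// Reproduce the duration arithmetic behind the
// "Observed pod startup duration" log entry above.
// Timestamps are copied verbatim from that entry.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-12-09 15:51:19 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-12-09 15:51:21.259201041 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-12-09 15:51:24.17694991 +0000 UTC")   // lastFinishedPulling
	running := parse("2025-12-09 15:51:25.338268226 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration: 6.338268226s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 3.420519357s
	fmt.Println(e2e, slo)
}
```
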
containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:51:30 crc kubenswrapper[4770]: E1209 15:51:30.589308 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:51:32 crc kubenswrapper[4770]: I1209 15:51:32.373029 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m67ps" podUID="c228da97-740b-4667-956b-a20be31c893b" containerName="registry-server" containerID="cri-o://96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904" gracePeriod=2 Dec 09 15:51:32 crc kubenswrapper[4770]: I1209 15:51:32.890107 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.032684 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5ld5\" (UniqueName: \"kubernetes.io/projected/c228da97-740b-4667-956b-a20be31c893b-kube-api-access-l5ld5\") pod \"c228da97-740b-4667-956b-a20be31c893b\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.032994 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-utilities\") pod \"c228da97-740b-4667-956b-a20be31c893b\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.033068 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-catalog-content\") pod \"c228da97-740b-4667-956b-a20be31c893b\" (UID: \"c228da97-740b-4667-956b-a20be31c893b\") " Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.034065 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-utilities" (OuterVolumeSpecName: "utilities") pod "c228da97-740b-4667-956b-a20be31c893b" (UID: "c228da97-740b-4667-956b-a20be31c893b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.101746 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c228da97-740b-4667-956b-a20be31c893b" (UID: "c228da97-740b-4667-956b-a20be31c893b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.135476 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.135518 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c228da97-740b-4667-956b-a20be31c893b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.388334 4770 generic.go:334] "Generic (PLEG): container finished" podID="c228da97-740b-4667-956b-a20be31c893b" containerID="96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904" exitCode=0 Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.388387 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m67ps" event={"ID":"c228da97-740b-4667-956b-a20be31c893b","Type":"ContainerDied","Data":"96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904"} Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.388402 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m67ps" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.388430 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m67ps" event={"ID":"c228da97-740b-4667-956b-a20be31c893b","Type":"ContainerDied","Data":"7a57c4dd07faa48f6ef9efb094c46ab1806411c7ee6686f7d579cf9b9f42e55a"} Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.388456 4770 scope.go:117] "RemoveContainer" containerID="96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.438470 4770 scope.go:117] "RemoveContainer" containerID="0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.652551 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c228da97-740b-4667-956b-a20be31c893b-kube-api-access-l5ld5" (OuterVolumeSpecName: "kube-api-access-l5ld5") pod "c228da97-740b-4667-956b-a20be31c893b" (UID: "c228da97-740b-4667-956b-a20be31c893b"). InnerVolumeSpecName "kube-api-access-l5ld5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.684461 4770 scope.go:117] "RemoveContainer" containerID="25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.748207 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5ld5\" (UniqueName: \"kubernetes.io/projected/c228da97-740b-4667-956b-a20be31c893b-kube-api-access-l5ld5\") on node \"crc\" DevicePath \"\"" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.823537 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m67ps"] Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.832818 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m67ps"] Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.866087 4770 scope.go:117] "RemoveContainer" containerID="96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904" Dec 09 15:51:33 crc kubenswrapper[4770]: E1209 15:51:33.866646 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904\": container with ID starting with 96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904 not found: ID does not exist" containerID="96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.866690 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904"} err="failed to get container status \"96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904\": rpc error: code = NotFound desc = could not find container \"96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904\": container with ID starting with 96caf2ee315d8fc095044cb48d75a8728a0d253e12684bfbb173c0444e7f1904 not found: ID does not exist" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.866720 4770 scope.go:117] "RemoveContainer" containerID="0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a" Dec 09 15:51:33 crc kubenswrapper[4770]: E1209 15:51:33.867138 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a\": container with ID starting with 0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a not found: ID does not exist" containerID="0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.867181 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a"} err="failed to get container status \"0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a\": rpc error: code = NotFound desc = could not find container \"0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a\": container with ID starting with 0c4d871ae913e9128fd868cd48d6545eaf41471941fba1e7ba9626c8d203b69a not found: ID does not exist" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.867195 4770 scope.go:117] "RemoveContainer" containerID="25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6" Dec 09 15:51:33 crc 
kubenswrapper[4770]: E1209 15:51:33.867673 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6\": container with ID starting with 25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6 not found: ID does not exist" containerID="25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6" Dec 09 15:51:33 crc kubenswrapper[4770]: I1209 15:51:33.867694 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6"} err="failed to get container status \"25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6\": rpc error: code = NotFound desc = could not find container \"25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6\": container with ID starting with 25a5a2df06d55ce4edfc559da160bba08b6f62bb0d0fb537666646950b8b34a6 not found: ID does not exist" Dec 09 15:51:34 crc kubenswrapper[4770]: I1209 15:51:34.599478 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c228da97-740b-4667-956b-a20be31c893b" path="/var/lib/kubelet/pods/c228da97-740b-4667-956b-a20be31c893b/volumes" Dec 09 15:51:36 crc kubenswrapper[4770]: E1209 15:51:36.591404 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:51:40 crc kubenswrapper[4770]: E1209 15:51:40.590670 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:51:44 crc kubenswrapper[4770]: I1209 15:51:44.589006 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:51:44 crc kubenswrapper[4770]: E1209 15:51:44.589798 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:51:49 crc kubenswrapper[4770]: E1209 15:51:49.593198 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:51:52 crc kubenswrapper[4770]: E1209 15:51:52.591777 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
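
The machine-config-daemon entries trace kubelet's crash-loop back-off at its cap: the sync attempts at 15:51:06, 15:51:19, 15:51:30, 15:51:44, 15:51:58 and 15:52:13 are all refused with "back-off 5m0s", then the 15:52:26 RemoveContainer below goes through without a back-off error and the container restarts at 15:52:30, i.e. the five-minute window had expired. Upstream kubelet grows this delay exponentially from a 10s base to a 300s cap; a toy model of that policy, assuming those upstream defaults:

```go
package main

import (
	"fmt"
	"time"
)

// Toy model of kubelet's per-container restart back-off
// (assumed upstream defaults: 10s initial delay, doubled after each
// consecutive crash, capped at 5m -- the "back-off 5m0s" above).
func crashLoopDelay(consecutiveCrashes int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < consecutiveCrashes; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("crash %d -> wait %v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// crash 1 -> 10s, crash 2 -> 20s, ... crash 6+ -> 5m0s (capped)
}
```
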
podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:51:58 crc kubenswrapper[4770]: I1209 15:51:58.594528 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:51:58 crc kubenswrapper[4770]: E1209 15:51:58.595271 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:52:00 crc kubenswrapper[4770]: E1209 15:52:00.590960 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.465351 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ck9c5/must-gather-9fljd"] Dec 09 15:52:03 crc kubenswrapper[4770]: E1209 15:52:03.466025 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c228da97-740b-4667-956b-a20be31c893b" containerName="extract-utilities" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.466037 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c228da97-740b-4667-956b-a20be31c893b" containerName="extract-utilities" Dec 09 15:52:03 crc kubenswrapper[4770]: E1209 15:52:03.466052 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c228da97-740b-4667-956b-a20be31c893b" containerName="extract-content" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.466059 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c228da97-740b-4667-956b-a20be31c893b" containerName="extract-content" Dec 09 15:52:03 crc kubenswrapper[4770]: E1209 15:52:03.466071 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c228da97-740b-4667-956b-a20be31c893b" containerName="registry-server" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.466077 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c228da97-740b-4667-956b-a20be31c893b" containerName="registry-server" Dec 09 15:52:03 crc kubenswrapper[4770]: E1209 15:52:03.466102 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa405f9b-e080-47ea-8728-509d0244f8d7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.466109 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa405f9b-e080-47ea-8728-509d0244f8d7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.466313 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa405f9b-e080-47ea-8728-509d0244f8d7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.466328 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c228da97-740b-4667-956b-a20be31c893b" containerName="registry-server" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.467394 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.470505 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ck9c5"/"openshift-service-ca.crt" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.489025 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ck9c5"/"default-dockercfg-bb7w9" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.489303 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ck9c5"/"kube-root-ca.crt" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.496658 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ck9c5/must-gather-9fljd"] Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.595354 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpdt7\" (UniqueName: \"kubernetes.io/projected/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-kube-api-access-kpdt7\") pod \"must-gather-9fljd\" (UID: \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\") " pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.595553 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-must-gather-output\") pod \"must-gather-9fljd\" (UID: \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\") " pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.697239 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpdt7\" (UniqueName: \"kubernetes.io/projected/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-kube-api-access-kpdt7\") pod \"must-gather-9fljd\" (UID: \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\") " pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.697381 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-must-gather-output\") pod \"must-gather-9fljd\" (UID: \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\") " pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.698169 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-must-gather-output\") pod \"must-gather-9fljd\" (UID: \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\") " pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.732941 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpdt7\" (UniqueName: \"kubernetes.io/projected/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-kube-api-access-kpdt7\") pod \"must-gather-9fljd\" (UID: \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\") " pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:52:03 crc kubenswrapper[4770]: I1209 15:52:03.790079 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:52:04 crc kubenswrapper[4770]: I1209 15:52:04.248717 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ck9c5/must-gather-9fljd"] Dec 09 15:52:04 crc kubenswrapper[4770]: W1209 15:52:04.253323 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2e07f6f_2a28_494f_87a2_4c9cabd03ecd.slice/crio-41b837ce825169b7f0b37e44502a9b678ebec8223cb71acd7adeac6edb4461ec WatchSource:0}: Error finding container 41b837ce825169b7f0b37e44502a9b678ebec8223cb71acd7adeac6edb4461ec: Status 404 returned error can't find the container with id 41b837ce825169b7f0b37e44502a9b678ebec8223cb71acd7adeac6edb4461ec Dec 09 15:52:04 crc kubenswrapper[4770]: E1209 15:52:04.589833 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:52:04 crc kubenswrapper[4770]: I1209 15:52:04.725862 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck9c5/must-gather-9fljd" event={"ID":"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd","Type":"ContainerStarted","Data":"41b837ce825169b7f0b37e44502a9b678ebec8223cb71acd7adeac6edb4461ec"} Dec 09 15:52:12 crc kubenswrapper[4770]: I1209 15:52:12.842924 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck9c5/must-gather-9fljd" event={"ID":"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd","Type":"ContainerStarted","Data":"2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b"} Dec 09 15:52:12 crc kubenswrapper[4770]: I1209 15:52:12.843547 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck9c5/must-gather-9fljd" event={"ID":"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd","Type":"ContainerStarted","Data":"cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc"} Dec 09 15:52:12 crc kubenswrapper[4770]: I1209 15:52:12.864168 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ck9c5/must-gather-9fljd" podStartSLOduration=2.084831919 podStartE2EDuration="9.864101776s" podCreationTimestamp="2025-12-09 15:52:03 +0000 UTC" firstStartedPulling="2025-12-09 15:52:04.255717837 +0000 UTC m=+5356.151919983" lastFinishedPulling="2025-12-09 15:52:12.034987704 +0000 UTC m=+5363.931189840" observedRunningTime="2025-12-09 15:52:12.856564619 +0000 UTC m=+5364.752766765" watchObservedRunningTime="2025-12-09 15:52:12.864101776 +0000 UTC m=+5364.760303902" Dec 09 15:52:13 crc kubenswrapper[4770]: I1209 15:52:13.589086 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:52:13 crc kubenswrapper[4770]: E1209 15:52:13.589670 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:52:15 crc kubenswrapper[4770]: E1209 15:52:15.591371 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
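
The manager.go:1169 warning above ("Failed to process watch event ... Status 404") is cAdvisor racing container creation: the crio-41b837ce... cgroup appears before the runtime can report the container, the follow-up lookup 404s, and the event is dropped; the ContainerStarted PLEG events that follow show the pod was unaffected. The same warning recurs below for the crc-debug pod. A generic sketch of that tolerate-NotFound pattern (names here are hypothetical, not cAdvisor's):

```go
package main

import (
	"errors"
	"fmt"
)

// A watched cgroup can appear before (or vanish after) the runtime
// knows the container, so NotFound on the follow-up lookup is logged
// and skipped rather than treated as fatal.
var errNotFound = errors.New("container not found")

// inspectContainer stands in for the runtime query that returned 404.
func inspectContainer(id string) error { return errNotFound }

func handleWatchEvent(id string) {
	if err := inspectContainer(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("dropping watch event for %s: %v\n", id, err) // benign race
			return
		}
		panic(err) // anything else is a real failure
	}
}

func main() { handleWatchEvent("41b837ce8251") }
```
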
Dec 09 15:52:16 crc kubenswrapper[4770]: E1209 15:52:16.591598 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.332058 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ck9c5/crc-debug-4x24m"] Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.334477 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.396917 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnjz6\" (UniqueName: \"kubernetes.io/projected/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-kube-api-access-mnjz6\") pod \"crc-debug-4x24m\" (UID: \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\") " pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.397165 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-host\") pod \"crc-debug-4x24m\" (UID: \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\") " pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.499162 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-host\") pod \"crc-debug-4x24m\" (UID: \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\") " pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.499324 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnjz6\" (UniqueName: \"kubernetes.io/projected/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-kube-api-access-mnjz6\") pod \"crc-debug-4x24m\" (UID: \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\") " pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.499590 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-host\") pod \"crc-debug-4x24m\" (UID: \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\") " pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.528678 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnjz6\" (UniqueName: \"kubernetes.io/projected/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-kube-api-access-mnjz6\") pod \"crc-debug-4x24m\" (UID: \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\") " pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.655981 4770 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:52:17 crc kubenswrapper[4770]: I1209 15:52:17.894509 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck9c5/crc-debug-4x24m" event={"ID":"6ba70cb6-5b11-4b69-bc86-70afd5045ef9","Type":"ContainerStarted","Data":"f7fc68df23bde61a342802c2547ef2b2144ce826873e7eabddf2a9c4d1da6395"} Dec 09 15:52:26 crc kubenswrapper[4770]: I1209 15:52:26.588754 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8" Dec 09 15:52:28 crc kubenswrapper[4770]: E1209 15:52:28.602720 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:52:28 crc kubenswrapper[4770]: E1209 15:52:28.602748 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:52:30 crc kubenswrapper[4770]: I1209 15:52:30.024203 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck9c5/crc-debug-4x24m" event={"ID":"6ba70cb6-5b11-4b69-bc86-70afd5045ef9","Type":"ContainerStarted","Data":"a9cd541ef84da7b1cc78a796e9d13ad08049c1dd840d9b06eef1aaf81d041d90"} Dec 09 15:52:30 crc kubenswrapper[4770]: I1209 15:52:30.027184 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"8e05582f6cc1c08a1110f5fd5979afe68a350503e91b64b764661d2248656855"} Dec 09 15:52:30 crc kubenswrapper[4770]: I1209 15:52:30.048162 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ck9c5/crc-debug-4x24m" podStartSLOduration=1.7575924729999999 podStartE2EDuration="13.048134445s" podCreationTimestamp="2025-12-09 15:52:17 +0000 UTC" firstStartedPulling="2025-12-09 15:52:17.706450407 +0000 UTC m=+5369.602652533" lastFinishedPulling="2025-12-09 15:52:28.996992369 +0000 UTC m=+5380.893194505" observedRunningTime="2025-12-09 15:52:30.038628665 +0000 UTC m=+5381.934830801" watchObservedRunningTime="2025-12-09 15:52:30.048134445 +0000 UTC m=+5381.944336581" Dec 09 15:52:40 crc kubenswrapper[4770]: E1209 15:52:40.590566 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:52:42 crc kubenswrapper[4770]: E1209 15:52:42.590468 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:52:54 crc kubenswrapper[4770]: 
E1209 15:52:54.593079 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:52:57 crc kubenswrapper[4770]: E1209 15:52:57.590466 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:52:59 crc kubenswrapper[4770]: I1209 15:52:59.445459 4770 generic.go:334] "Generic (PLEG): container finished" podID="6ba70cb6-5b11-4b69-bc86-70afd5045ef9" containerID="a9cd541ef84da7b1cc78a796e9d13ad08049c1dd840d9b06eef1aaf81d041d90" exitCode=0 Dec 09 15:52:59 crc kubenswrapper[4770]: I1209 15:52:59.445563 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck9c5/crc-debug-4x24m" event={"ID":"6ba70cb6-5b11-4b69-bc86-70afd5045ef9","Type":"ContainerDied","Data":"a9cd541ef84da7b1cc78a796e9d13ad08049c1dd840d9b06eef1aaf81d041d90"} Dec 09 15:53:00 crc kubenswrapper[4770]: I1209 15:53:00.576767 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:53:00 crc kubenswrapper[4770]: I1209 15:53:00.620803 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ck9c5/crc-debug-4x24m"] Dec 09 15:53:00 crc kubenswrapper[4770]: I1209 15:53:00.636019 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ck9c5/crc-debug-4x24m"] Dec 09 15:53:00 crc kubenswrapper[4770]: I1209 15:53:00.719254 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-host\") pod \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\" (UID: \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\") " Dec 09 15:53:00 crc kubenswrapper[4770]: I1209 15:53:00.719372 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnjz6\" (UniqueName: \"kubernetes.io/projected/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-kube-api-access-mnjz6\") pod \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\" (UID: \"6ba70cb6-5b11-4b69-bc86-70afd5045ef9\") " Dec 09 15:53:00 crc kubenswrapper[4770]: I1209 15:53:00.719687 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-host" (OuterVolumeSpecName: "host") pod "6ba70cb6-5b11-4b69-bc86-70afd5045ef9" (UID: "6ba70cb6-5b11-4b69-bc86-70afd5045ef9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
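
The UnmountVolume / TearDown succeeded / "Volume detached" triplets above are the volume manager's reconciler converging actual state to desired state: once the API deletes the pod, every volume still mounted for it is torn down and then cleared from the actual-state cache. A minimal sketch of that desired-versus-actual diff (types and names are illustrative, not kubelet's):

```go
package main

import "fmt"

// One reconcile pass: any volume still mounted for a pod that no
// longer wants it is unmounted, then marked detached.
type volumeKey struct{ pod, volume string }

func reconcile(desired, actual map[volumeKey]bool) {
	for k := range actual {
		if !desired[k] {
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", k.volume, k.pod)
			delete(actual, k) // TearDown succeeded
			fmt.Printf("Volume detached for volume %q\n", k.volume)
		}
	}
}

func main() {
	actual := map[volumeKey]bool{
		{"6ba70cb6-5b11-4b69-bc86-70afd5045ef9", "host"}:                  true,
		{"6ba70cb6-5b11-4b69-bc86-70afd5045ef9", "kube-api-access-mnjz6"}: true,
	}
	reconcile(map[volumeKey]bool{}, actual) // pod deleted: desired state is empty
}
```
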
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 15:53:00 crc kubenswrapper[4770]: I1209 15:53:00.720295 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-host\") on node \"crc\" DevicePath \"\"" Dec 09 15:53:00 crc kubenswrapper[4770]: I1209 15:53:00.726131 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-kube-api-access-mnjz6" (OuterVolumeSpecName: "kube-api-access-mnjz6") pod "6ba70cb6-5b11-4b69-bc86-70afd5045ef9" (UID: "6ba70cb6-5b11-4b69-bc86-70afd5045ef9"). InnerVolumeSpecName "kube-api-access-mnjz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:53:00 crc kubenswrapper[4770]: I1209 15:53:00.821914 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnjz6\" (UniqueName: \"kubernetes.io/projected/6ba70cb6-5b11-4b69-bc86-70afd5045ef9-kube-api-access-mnjz6\") on node \"crc\" DevicePath \"\"" Dec 09 15:53:01 crc kubenswrapper[4770]: I1209 15:53:01.466972 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7fc68df23bde61a342802c2547ef2b2144ce826873e7eabddf2a9c4d1da6395" Dec 09 15:53:01 crc kubenswrapper[4770]: I1209 15:53:01.467007 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck9c5/crc-debug-4x24m" Dec 09 15:53:01 crc kubenswrapper[4770]: I1209 15:53:01.844988 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ck9c5/crc-debug-d5lvk"] Dec 09 15:53:01 crc kubenswrapper[4770]: E1209 15:53:01.845438 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba70cb6-5b11-4b69-bc86-70afd5045ef9" containerName="container-00" Dec 09 15:53:01 crc kubenswrapper[4770]: I1209 15:53:01.845605 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba70cb6-5b11-4b69-bc86-70afd5045ef9" containerName="container-00" Dec 09 15:53:01 crc kubenswrapper[4770]: I1209 15:53:01.845933 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba70cb6-5b11-4b69-bc86-70afd5045ef9" containerName="container-00" Dec 09 15:53:01 crc kubenswrapper[4770]: I1209 15:53:01.846826 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:01 crc kubenswrapper[4770]: I1209 15:53:01.942848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5049534-4d45-44c2-a600-6dce09787fd5-host\") pod \"crc-debug-d5lvk\" (UID: \"d5049534-4d45-44c2-a600-6dce09787fd5\") " pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:01 crc kubenswrapper[4770]: I1209 15:53:01.943190 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcgnd\" (UniqueName: \"kubernetes.io/projected/d5049534-4d45-44c2-a600-6dce09787fd5-kube-api-access-fcgnd\") pod \"crc-debug-d5lvk\" (UID: \"d5049534-4d45-44c2-a600-6dce09787fd5\") " pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:02 crc kubenswrapper[4770]: I1209 15:53:02.052232 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcgnd\" (UniqueName: \"kubernetes.io/projected/d5049534-4d45-44c2-a600-6dce09787fd5-kube-api-access-fcgnd\") pod \"crc-debug-d5lvk\" (UID: \"d5049534-4d45-44c2-a600-6dce09787fd5\") " pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:02 crc kubenswrapper[4770]: I1209 15:53:02.052366 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5049534-4d45-44c2-a600-6dce09787fd5-host\") pod \"crc-debug-d5lvk\" (UID: \"d5049534-4d45-44c2-a600-6dce09787fd5\") " pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:02 crc kubenswrapper[4770]: I1209 15:53:02.052498 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5049534-4d45-44c2-a600-6dce09787fd5-host\") pod \"crc-debug-d5lvk\" (UID: \"d5049534-4d45-44c2-a600-6dce09787fd5\") " pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:02 crc kubenswrapper[4770]: I1209 15:53:02.611088 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba70cb6-5b11-4b69-bc86-70afd5045ef9" path="/var/lib/kubelet/pods/6ba70cb6-5b11-4b69-bc86-70afd5045ef9/volumes" Dec 09 15:53:02 crc kubenswrapper[4770]: I1209 15:53:02.753381 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcgnd\" (UniqueName: \"kubernetes.io/projected/d5049534-4d45-44c2-a600-6dce09787fd5-kube-api-access-fcgnd\") pod \"crc-debug-d5lvk\" (UID: \"d5049534-4d45-44c2-a600-6dce09787fd5\") " pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:02 crc kubenswrapper[4770]: I1209 15:53:02.767476 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:02 crc kubenswrapper[4770]: W1209 15:53:02.809055 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5049534_4d45_44c2_a600_6dce09787fd5.slice/crio-fdd33af1a3f4ea8afa36fc9fd6222dcc35f252336334bc8f779faf4b266bbd1d WatchSource:0}: Error finding container fdd33af1a3f4ea8afa36fc9fd6222dcc35f252336334bc8f779faf4b266bbd1d: Status 404 returned error can't find the container with id fdd33af1a3f4ea8afa36fc9fd6222dcc35f252336334bc8f779faf4b266bbd1d Dec 09 15:53:03 crc kubenswrapper[4770]: I1209 15:53:03.484655 4770 generic.go:334] "Generic (PLEG): container finished" podID="d5049534-4d45-44c2-a600-6dce09787fd5" containerID="3918762170d8d3e0a4ae97c6eeea357b1af02c3658d0b67e411d4411012e8253" exitCode=1 Dec 09 15:53:03 crc kubenswrapper[4770]: I1209 15:53:03.484735 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" event={"ID":"d5049534-4d45-44c2-a600-6dce09787fd5","Type":"ContainerDied","Data":"3918762170d8d3e0a4ae97c6eeea357b1af02c3658d0b67e411d4411012e8253"} Dec 09 15:53:03 crc kubenswrapper[4770]: I1209 15:53:03.485037 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" event={"ID":"d5049534-4d45-44c2-a600-6dce09787fd5","Type":"ContainerStarted","Data":"fdd33af1a3f4ea8afa36fc9fd6222dcc35f252336334bc8f779faf4b266bbd1d"} Dec 09 15:53:03 crc kubenswrapper[4770]: I1209 15:53:03.540889 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ck9c5/crc-debug-d5lvk"] Dec 09 15:53:03 crc kubenswrapper[4770]: I1209 15:53:03.549137 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ck9c5/crc-debug-d5lvk"] Dec 09 15:53:05 crc kubenswrapper[4770]: I1209 15:53:05.236816 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:05 crc kubenswrapper[4770]: I1209 15:53:05.425083 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5049534-4d45-44c2-a600-6dce09787fd5-host\") pod \"d5049534-4d45-44c2-a600-6dce09787fd5\" (UID: \"d5049534-4d45-44c2-a600-6dce09787fd5\") " Dec 09 15:53:05 crc kubenswrapper[4770]: I1209 15:53:05.425222 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5049534-4d45-44c2-a600-6dce09787fd5-host" (OuterVolumeSpecName: "host") pod "d5049534-4d45-44c2-a600-6dce09787fd5" (UID: "d5049534-4d45-44c2-a600-6dce09787fd5"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 15:53:05 crc kubenswrapper[4770]: I1209 15:53:05.425358 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcgnd\" (UniqueName: \"kubernetes.io/projected/d5049534-4d45-44c2-a600-6dce09787fd5-kube-api-access-fcgnd\") pod \"d5049534-4d45-44c2-a600-6dce09787fd5\" (UID: \"d5049534-4d45-44c2-a600-6dce09787fd5\") " Dec 09 15:53:05 crc kubenswrapper[4770]: I1209 15:53:05.426016 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5049534-4d45-44c2-a600-6dce09787fd5-host\") on node \"crc\" DevicePath \"\"" Dec 09 15:53:05 crc kubenswrapper[4770]: I1209 15:53:05.446015 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5049534-4d45-44c2-a600-6dce09787fd5-kube-api-access-fcgnd" (OuterVolumeSpecName: "kube-api-access-fcgnd") pod "d5049534-4d45-44c2-a600-6dce09787fd5" (UID: "d5049534-4d45-44c2-a600-6dce09787fd5"). InnerVolumeSpecName "kube-api-access-fcgnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:53:05 crc kubenswrapper[4770]: I1209 15:53:05.514458 4770 scope.go:117] "RemoveContainer" containerID="3918762170d8d3e0a4ae97c6eeea357b1af02c3658d0b67e411d4411012e8253" Dec 09 15:53:05 crc kubenswrapper[4770]: I1209 15:53:05.514495 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck9c5/crc-debug-d5lvk" Dec 09 15:53:05 crc kubenswrapper[4770]: I1209 15:53:05.527535 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcgnd\" (UniqueName: \"kubernetes.io/projected/d5049534-4d45-44c2-a600-6dce09787fd5-kube-api-access-fcgnd\") on node \"crc\" DevicePath \"\"" Dec 09 15:53:06 crc kubenswrapper[4770]: I1209 15:53:06.601082 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5049534-4d45-44c2-a600-6dce09787fd5" path="/var/lib/kubelet/pods/d5049534-4d45-44c2-a600-6dce09787fd5/volumes" Dec 09 15:53:08 crc kubenswrapper[4770]: E1209 15:53:08.603209 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:53:10 crc kubenswrapper[4770]: I1209 15:53:10.591017 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:53:10 crc kubenswrapper[4770]: E1209 15:53:10.711976 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:53:10 crc kubenswrapper[4770]: E1209 15:53:10.712048 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:53:10 crc kubenswrapper[4770]: E1209 15:53:10.712255 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 15:53:10 crc kubenswrapper[4770]: E1209 15:53:10.713632 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:53:22 crc kubenswrapper[4770]: E1209 15:53:22.590922 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:53:22 crc kubenswrapper[4770]: E1209 15:53:22.590934 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:53:35 crc kubenswrapper[4770]: E1209 15:53:35.592335 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:53:37 crc kubenswrapper[4770]: E1209 15:53:37.730790 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:53:37 crc kubenswrapper[4770]: E1209 15:53:37.731359 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:53:37 crc kubenswrapper[4770]: E1209 15:53:37.731502 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:53:37 crc kubenswrapper[4770]: E1209 15:53:37.732935 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:53:44 crc kubenswrapper[4770]: I1209 15:53:44.806158 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_08118160-2e03-4319-97ed-051b92b14c1e/init-config-reloader/0.log" Dec 09 15:53:45 crc kubenswrapper[4770]: I1209 15:53:45.049796 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_08118160-2e03-4319-97ed-051b92b14c1e/init-config-reloader/0.log" Dec 09 15:53:45 crc kubenswrapper[4770]: I1209 15:53:45.062602 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_08118160-2e03-4319-97ed-051b92b14c1e/alertmanager/0.log" Dec 09 15:53:45 crc kubenswrapper[4770]: I1209 15:53:45.104699 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_08118160-2e03-4319-97ed-051b92b14c1e/config-reloader/0.log" Dec 09 15:53:45 crc kubenswrapper[4770]: I1209 15:53:45.859220 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7dccbff898-2zrvn_1a50151f-6df2-4bd3-b8aa-edb8f5545b2c/barbican-api/0.log" Dec 09 15:53:45 crc kubenswrapper[4770]: I1209 15:53:45.938827 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8466dc55b6-5ppcl_a2242028-0a76-456d-b92c-28ccda87972d/barbican-keystone-listener/0.log" Dec 09 15:53:45 crc kubenswrapper[4770]: I1209 15:53:45.945051 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7dccbff898-2zrvn_1a50151f-6df2-4bd3-b8aa-edb8f5545b2c/barbican-api-log/0.log" Dec 09 15:53:46 crc kubenswrapper[4770]: I1209 15:53:46.111383 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8466dc55b6-5ppcl_a2242028-0a76-456d-b92c-28ccda87972d/barbican-keystone-listener-log/0.log" Dec 09 15:53:46 crc kubenswrapper[4770]: I1209 15:53:46.155103 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d6894fb8f-gdsw2_0c36a704-9c2f-4761-80b3-45215f34c1f6/barbican-worker/0.log" Dec 09 15:53:46 crc kubenswrapper[4770]: I1209 15:53:46.283309 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d6894fb8f-gdsw2_0c36a704-9c2f-4761-80b3-45215f34c1f6/barbican-worker-log/0.log" Dec 09 15:53:46 crc kubenswrapper[4770]: I1209 15:53:46.390384 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9lrgz_0efe3c6a-6cd4-4b70-9929-b207e4aecee3/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:46 crc kubenswrapper[4770]: I1209 15:53:46.624314 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_46cbdc7f-5a87-4c97-a56e-910d75b00675/ceilometer-notification-agent/0.log" Dec 09 15:53:46 crc kubenswrapper[4770]: I1209 15:53:46.624592 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_46cbdc7f-5a87-4c97-a56e-910d75b00675/proxy-httpd/0.log" Dec 09 15:53:46 crc kubenswrapper[4770]: I1209 15:53:46.724477 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_46cbdc7f-5a87-4c97-a56e-910d75b00675/sg-core/0.log" Dec 09 15:53:46 crc kubenswrapper[4770]: I1209 15:53:46.817056 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_9fad8a60-e1ec-47ed-8aca-46b3aa3319d2/cinder-api-log/0.log" Dec 09 15:53:46 crc kubenswrapper[4770]: I1209 15:53:46.939805 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9fad8a60-e1ec-47ed-8aca-46b3aa3319d2/cinder-api/0.log" Dec 09 15:53:47 crc kubenswrapper[4770]: I1209 15:53:47.022507 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_37fe0132-33c4-4bf9-98bb-43ae4b9c7902/cinder-scheduler/0.log" Dec 09 15:53:47 crc kubenswrapper[4770]: I1209 15:53:47.145952 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_37fe0132-33c4-4bf9-98bb-43ae4b9c7902/probe/0.log" Dec 09 15:53:47 crc kubenswrapper[4770]: I1209 15:53:47.331669 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_d2b8d36f-0bd8-4ac8-b673-15f5728d0a78/cloudkitty-api-log/0.log" Dec 09 15:53:47 crc kubenswrapper[4770]: I1209 15:53:47.454955 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_d2b8d36f-0bd8-4ac8-b673-15f5728d0a78/cloudkitty-api/0.log" Dec 09 15:53:47 crc kubenswrapper[4770]: I1209 15:53:47.688467 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_1ac324ef-d65f-421c-b382-9c321ae7d447/loki-compactor/0.log" Dec 09 15:53:47 crc kubenswrapper[4770]: I1209 15:53:47.779763 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-664b687b54-75lfm_dc16ff55-b814-4912-842a-2744c0450b51/loki-distributor/0.log" Dec 09 15:53:47 crc kubenswrapper[4770]: I1209 15:53:47.922991 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-bc75944f-86wfb_754ed0cc-ec25-45d4-b0d0-907d92e939fd/gateway/0.log" Dec 09 15:53:48 crc kubenswrapper[4770]: I1209 15:53:48.027072 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-bc75944f-rw588_bf0f3c4c-bbc9-484d-8153-e12ad4118c9a/gateway/0.log" Dec 09 15:53:48 crc kubenswrapper[4770]: I1209 15:53:48.133364 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_24f02fdc-5866-4325-8d48-1333cd9a33d9/loki-index-gateway/0.log" Dec 09 15:53:48 crc kubenswrapper[4770]: I1209 15:53:48.242790 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_67dab40a-3d7c-4737-bca9-28dc6280071c/loki-ingester/0.log" Dec 09 15:53:48 crc kubenswrapper[4770]: I1209 15:53:48.502970 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-5467947bf7-6tmxj_3d8fd93c-ff55-4b03-9024-52af60e3e632/loki-querier/0.log" Dec 09 15:53:48 crc kubenswrapper[4770]: E1209 15:53:48.607967 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:53:48 crc kubenswrapper[4770]: I1209 15:53:48.624335 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-7c8cd744d9-vzrhp_aeb389cf-bc24-4200-8561-a3c804f1d8c0/loki-query-frontend/0.log" Dec 09 15:53:48 crc kubenswrapper[4770]: I1209 15:53:48.835286 4770 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-bmz6l_60261685-a8d1-4122-85a3-42157081385f/init/0.log" Dec 09 15:53:49 crc kubenswrapper[4770]: I1209 15:53:49.132434 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-bmz6l_60261685-a8d1-4122-85a3-42157081385f/init/0.log" Dec 09 15:53:49 crc kubenswrapper[4770]: I1209 15:53:49.181202 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-bmz6l_60261685-a8d1-4122-85a3-42157081385f/dnsmasq-dns/0.log" Dec 09 15:53:49 crc kubenswrapper[4770]: I1209 15:53:49.242198 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-5x54n_d2fd4634-93f2-4bc7-8f5b-7cf34397626c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:49 crc kubenswrapper[4770]: I1209 15:53:49.513335 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-6pd6q_aa405f9b-e080-47ea-8728-509d0244f8d7/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:49 crc kubenswrapper[4770]: I1209 15:53:49.514918 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7f9fz_aa029105-c93b-48e9-8331-76d58f4794d8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:49 crc kubenswrapper[4770]: I1209 15:53:49.777878 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7lmc5_93cfef83-a919-435a-88c7-b6f2b2a6c480/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:49 crc kubenswrapper[4770]: I1209 15:53:49.815754 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-g6fwr_89a0c6ea-2af7-46b0-afeb-bdebd2de4b1d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:50 crc kubenswrapper[4770]: I1209 15:53:50.170472 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nqfvh_145ea2d4-9119-435e-aac0-ac0ee9eb29bf/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:50 crc kubenswrapper[4770]: I1209 15:53:50.321876 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qmmk7_b668c218-32c2-4a09-80d6-f98e619550bb/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:50 crc kubenswrapper[4770]: I1209 15:53:50.465427 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0271ab27-dfdb-4223-b3f3-fc82c2024a02/glance-httpd/0.log" Dec 09 15:53:50 crc kubenswrapper[4770]: I1209 15:53:50.488605 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0271ab27-dfdb-4223-b3f3-fc82c2024a02/glance-log/0.log" Dec 09 15:53:50 crc kubenswrapper[4770]: E1209 15:53:50.594144 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:53:50 crc kubenswrapper[4770]: I1209 15:53:50.666173 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_6bd518e4-fbae-471d-8a84-930c03f57a61/glance-httpd/0.log" Dec 09 15:53:50 crc kubenswrapper[4770]: I1209 15:53:50.729817 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6bd518e4-fbae-471d-8a84-930c03f57a61/glance-log/0.log" Dec 09 15:53:50 crc kubenswrapper[4770]: I1209 15:53:50.988647 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29421541-9p2fg_35d24b29-d465-4502-bdf7-e4bd6479926c/keystone-cron/0.log" Dec 09 15:53:51 crc kubenswrapper[4770]: I1209 15:53:51.047549 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b79f96648-bcj77_34aa6241-b9e1-45b1-915b-2de7e264e2a0/keystone-api/0.log" Dec 09 15:53:51 crc kubenswrapper[4770]: I1209 15:53:51.177228 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a9bc8400-6001-4e33-9563-0cff42eceec2/kube-state-metrics/0.log" Dec 09 15:53:51 crc kubenswrapper[4770]: I1209 15:53:51.455513 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fcb9dd595-tqt29_678619ae-5986-49ce-b307-53661c4f94f9/neutron-api/0.log" Dec 09 15:53:51 crc kubenswrapper[4770]: I1209 15:53:51.550649 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fcb9dd595-tqt29_678619ae-5986-49ce-b307-53661c4f94f9/neutron-httpd/0.log" Dec 09 15:53:51 crc kubenswrapper[4770]: I1209 15:53:51.975991 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_cfa70b39-dc84-4b2a-ad61-9e01efa16ab8/nova-api-log/0.log" Dec 09 15:53:52 crc kubenswrapper[4770]: I1209 15:53:52.236932 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_44fa7178-8f34-4f79-b917-c3763e01a006/nova-cell0-conductor-conductor/0.log" Dec 09 15:53:52 crc kubenswrapper[4770]: I1209 15:53:52.362847 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_cfa70b39-dc84-4b2a-ad61-9e01efa16ab8/nova-api-api/0.log" Dec 09 15:53:52 crc kubenswrapper[4770]: I1209 15:53:52.530370 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_be69bcf3-ffee-4bfd-a1d2-e3c9337d0722/nova-cell1-conductor-conductor/0.log" Dec 09 15:53:52 crc kubenswrapper[4770]: I1209 15:53:52.731563 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f8dce108-23b0-4cc2-bedd-af46b899dcae/nova-cell1-novncproxy-novncproxy/0.log" Dec 09 15:53:52 crc kubenswrapper[4770]: I1209 15:53:52.828898 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba5eef0a-3956-4184-a76c-ab3ecc01f110/nova-metadata-log/0.log" Dec 09 15:53:53 crc kubenswrapper[4770]: I1209 15:53:53.025444 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_6088cd85-c911-4bfb-88cf-4c837adf9548/cloudkitty-proc/0.log" Dec 09 15:53:53 crc kubenswrapper[4770]: I1209 15:53:53.393922 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_16d7dad5-f6f3-418b-989c-2a21ca4d76ef/nova-scheduler-scheduler/0.log" Dec 09 15:53:53 crc kubenswrapper[4770]: I1209 15:53:53.920201 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8f05019-cb7b-46bb-bb57-2f8c6a9bba53/mysql-bootstrap/0.log" Dec 09 15:53:54 crc kubenswrapper[4770]: I1209 15:53:54.079449 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_b8f05019-cb7b-46bb-bb57-2f8c6a9bba53/mysql-bootstrap/0.log" Dec 09 15:53:54 crc kubenswrapper[4770]: I1209 15:53:54.141604 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8f05019-cb7b-46bb-bb57-2f8c6a9bba53/galera/0.log" Dec 09 15:53:54 crc kubenswrapper[4770]: I1209 15:53:54.357927 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1d87ac62-20d5-476f-97d9-34d8698fc78f/mysql-bootstrap/0.log" Dec 09 15:53:54 crc kubenswrapper[4770]: I1209 15:53:54.527045 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1d87ac62-20d5-476f-97d9-34d8698fc78f/mysql-bootstrap/0.log" Dec 09 15:53:54 crc kubenswrapper[4770]: I1209 15:53:54.537336 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1d87ac62-20d5-476f-97d9-34d8698fc78f/galera/0.log" Dec 09 15:53:54 crc kubenswrapper[4770]: I1209 15:53:54.743686 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e82b342d-8682-4892-836b-6248fcea0d3f/openstackclient/0.log" Dec 09 15:53:54 crc kubenswrapper[4770]: I1209 15:53:54.846808 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5v8bz_c324e2da-48b3-4772-bcf4-ebb0dc2543eb/openstack-network-exporter/0.log" Dec 09 15:53:54 crc kubenswrapper[4770]: I1209 15:53:54.949717 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba5eef0a-3956-4184-a76c-ab3ecc01f110/nova-metadata-metadata/0.log" Dec 09 15:53:55 crc kubenswrapper[4770]: I1209 15:53:55.035707 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5ng4w_f087e8e2-8532-4abc-925b-574ebb448bde/ovsdb-server-init/0.log" Dec 09 15:53:55 crc kubenswrapper[4770]: I1209 15:53:55.627445 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5ng4w_f087e8e2-8532-4abc-925b-574ebb448bde/ovsdb-server/0.log" Dec 09 15:53:55 crc kubenswrapper[4770]: I1209 15:53:55.726669 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5ng4w_f087e8e2-8532-4abc-925b-574ebb448bde/ovsdb-server-init/0.log" Dec 09 15:53:55 crc kubenswrapper[4770]: I1209 15:53:55.747056 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5ng4w_f087e8e2-8532-4abc-925b-574ebb448bde/ovs-vswitchd/0.log" Dec 09 15:53:55 crc kubenswrapper[4770]: I1209 15:53:55.879270 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-slxb5_8eebc5e5-e737-4171-abed-1e04fa89b0b4/ovn-controller/0.log" Dec 09 15:53:55 crc kubenswrapper[4770]: I1209 15:53:55.964123 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_522fb59d-17cc-47a2-8c3d-1025aacfd292/ovn-northd/0.log" Dec 09 15:53:56 crc kubenswrapper[4770]: I1209 15:53:56.013603 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_522fb59d-17cc-47a2-8c3d-1025aacfd292/openstack-network-exporter/0.log" Dec 09 15:53:56 crc kubenswrapper[4770]: I1209 15:53:56.221855 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3917b78f-5515-4149-82d3-96a981c77ac5/ovsdbserver-nb/0.log" Dec 09 15:53:56 crc kubenswrapper[4770]: I1209 15:53:56.221916 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_3917b78f-5515-4149-82d3-96a981c77ac5/openstack-network-exporter/0.log" Dec 09 15:53:56 crc kubenswrapper[4770]: I1209 15:53:56.410429 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_41190237-fd6b-45b3-b68e-ad67b77ea11d/openstack-network-exporter/0.log" Dec 09 15:53:56 crc kubenswrapper[4770]: I1209 15:53:56.752346 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_41190237-fd6b-45b3-b68e-ad67b77ea11d/ovsdbserver-sb/0.log" Dec 09 15:53:56 crc kubenswrapper[4770]: I1209 15:53:56.763384 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85564fd668-bwn85_6bac9177-f03c-4a16-b2c4-da456883ca22/placement-api/0.log" Dec 09 15:53:56 crc kubenswrapper[4770]: I1209 15:53:56.859876 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85564fd668-bwn85_6bac9177-f03c-4a16-b2c4-da456883ca22/placement-log/0.log" Dec 09 15:53:56 crc kubenswrapper[4770]: I1209 15:53:56.978977 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bb4d3067-7df3-4aae-ad4a-7e24c480d3f8/init-config-reloader/0.log" Dec 09 15:53:57 crc kubenswrapper[4770]: I1209 15:53:57.242569 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bb4d3067-7df3-4aae-ad4a-7e24c480d3f8/thanos-sidecar/0.log" Dec 09 15:53:57 crc kubenswrapper[4770]: I1209 15:53:57.259481 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bb4d3067-7df3-4aae-ad4a-7e24c480d3f8/config-reloader/0.log" Dec 09 15:53:57 crc kubenswrapper[4770]: I1209 15:53:57.303524 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bb4d3067-7df3-4aae-ad4a-7e24c480d3f8/init-config-reloader/0.log" Dec 09 15:53:57 crc kubenswrapper[4770]: I1209 15:53:57.348687 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bb4d3067-7df3-4aae-ad4a-7e24c480d3f8/prometheus/0.log" Dec 09 15:53:57 crc kubenswrapper[4770]: I1209 15:53:57.537789 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f777c49a-0725-4856-895f-06add0375093/setup-container/0.log" Dec 09 15:53:57 crc kubenswrapper[4770]: I1209 15:53:57.839332 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f777c49a-0725-4856-895f-06add0375093/setup-container/0.log" Dec 09 15:53:57 crc kubenswrapper[4770]: I1209 15:53:57.885140 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f777c49a-0725-4856-895f-06add0375093/rabbitmq/0.log" Dec 09 15:53:57 crc kubenswrapper[4770]: I1209 15:53:57.890791 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7caea4cd-eb43-420d-8c5e-835745de19e8/setup-container/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.112083 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7caea4cd-eb43-420d-8c5e-835745de19e8/setup-container/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.152380 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7caea4cd-eb43-420d-8c5e-835745de19e8/rabbitmq/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.177521 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-srdrq_ed33ffb9-0111-411d-bcf6-8f072d236a17/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.425246 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zffhg_0f455fc3-9b1f-48b0-9527-c9fa301c6b6d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.581628 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-dccdd6975-6g8sl_e9e75e98-4fff-4755-9908-1e0d4ac982bb/proxy-httpd/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.637953 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-dccdd6975-6g8sl_e9e75e98-4fff-4755-9908-1e0d4ac982bb/proxy-server/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.672317 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dshr4_b8f30831-ad4f-4009-b177-e645f911f5b4/swift-ring-rebalance/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.909535 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/account-reaper/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.915819 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/account-replicator/0.log" Dec 09 15:53:58 crc kubenswrapper[4770]: I1209 15:53:58.931016 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/account-auditor/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.125414 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/account-server/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.184249 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/container-auditor/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.209218 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/container-server/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.282767 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/container-replicator/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.385045 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/container-updater/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.454109 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/object-auditor/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.474123 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/object-expirer/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.578274 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/object-replicator/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.581646 4770 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/object-server/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.715625 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/rsync/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.719180 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/object-updater/0.log" Dec 09 15:53:59 crc kubenswrapper[4770]: I1209 15:53:59.806797 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cbc15e71-9605-466b-8947-aa2ca716bc2d/swift-recon-cron/0.log" Dec 09 15:54:02 crc kubenswrapper[4770]: E1209 15:54:02.589855 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:54:05 crc kubenswrapper[4770]: E1209 15:54:05.590337 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:54:06 crc kubenswrapper[4770]: I1209 15:54:06.392034 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6fc21963-c17e-4378-938e-200a8497203e/memcached/0.log" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.109439 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wfj9c"] Dec 09 15:54:09 crc kubenswrapper[4770]: E1209 15:54:09.110561 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5049534-4d45-44c2-a600-6dce09787fd5" containerName="container-00" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.110576 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5049534-4d45-44c2-a600-6dce09787fd5" containerName="container-00" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.110834 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5049534-4d45-44c2-a600-6dce09787fd5" containerName="container-00" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.112640 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.126109 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfj9c"] Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.210226 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128803f7-9991-4b51-bd3c-76c4aa76855f-catalog-content\") pod \"certified-operators-wfj9c\" (UID: \"128803f7-9991-4b51-bd3c-76c4aa76855f\") " pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.210655 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128803f7-9991-4b51-bd3c-76c4aa76855f-utilities\") pod \"certified-operators-wfj9c\" (UID: \"128803f7-9991-4b51-bd3c-76c4aa76855f\") " pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.210770 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqws9\" (UniqueName: \"kubernetes.io/projected/128803f7-9991-4b51-bd3c-76c4aa76855f-kube-api-access-pqws9\") pod \"certified-operators-wfj9c\" (UID: \"128803f7-9991-4b51-bd3c-76c4aa76855f\") " pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.312862 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqws9\" (UniqueName: \"kubernetes.io/projected/128803f7-9991-4b51-bd3c-76c4aa76855f-kube-api-access-pqws9\") pod \"certified-operators-wfj9c\" (UID: \"128803f7-9991-4b51-bd3c-76c4aa76855f\") " pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.312988 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128803f7-9991-4b51-bd3c-76c4aa76855f-catalog-content\") pod \"certified-operators-wfj9c\" (UID: \"128803f7-9991-4b51-bd3c-76c4aa76855f\") " pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.313065 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128803f7-9991-4b51-bd3c-76c4aa76855f-utilities\") pod \"certified-operators-wfj9c\" (UID: \"128803f7-9991-4b51-bd3c-76c4aa76855f\") " pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.313562 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128803f7-9991-4b51-bd3c-76c4aa76855f-catalog-content\") pod \"certified-operators-wfj9c\" (UID: \"128803f7-9991-4b51-bd3c-76c4aa76855f\") " pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.313574 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128803f7-9991-4b51-bd3c-76c4aa76855f-utilities\") pod \"certified-operators-wfj9c\" (UID: \"128803f7-9991-4b51-bd3c-76c4aa76855f\") " pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.351524 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pqws9\" (UniqueName: \"kubernetes.io/projected/128803f7-9991-4b51-bd3c-76c4aa76855f-kube-api-access-pqws9\") pod \"certified-operators-wfj9c\" (UID: \"128803f7-9991-4b51-bd3c-76c4aa76855f\") " pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:09 crc kubenswrapper[4770]: I1209 15:54:09.451789 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:10 crc kubenswrapper[4770]: I1209 15:54:10.065214 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfj9c"] Dec 09 15:54:10 crc kubenswrapper[4770]: I1209 15:54:10.168933 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfj9c" event={"ID":"128803f7-9991-4b51-bd3c-76c4aa76855f","Type":"ContainerStarted","Data":"edc7b628961ed5da1f6966920f2d90515a45d7faa4c7357579f92e25febc62da"} Dec 09 15:54:11 crc kubenswrapper[4770]: I1209 15:54:11.178332 4770 generic.go:334] "Generic (PLEG): container finished" podID="128803f7-9991-4b51-bd3c-76c4aa76855f" containerID="5a473fe73ae99e001a1b74c501fe04d1a458458b3e001606030b1d021995d9cc" exitCode=0 Dec 09 15:54:11 crc kubenswrapper[4770]: I1209 15:54:11.178645 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfj9c" event={"ID":"128803f7-9991-4b51-bd3c-76c4aa76855f","Type":"ContainerDied","Data":"5a473fe73ae99e001a1b74c501fe04d1a458458b3e001606030b1d021995d9cc"} Dec 09 15:54:17 crc kubenswrapper[4770]: E1209 15:54:17.592534 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:54:20 crc kubenswrapper[4770]: I1209 15:54:20.267063 4770 generic.go:334] "Generic (PLEG): container finished" podID="128803f7-9991-4b51-bd3c-76c4aa76855f" containerID="cfe87f1c92cb06e73b2727910f085093a45892e39e4fc8b0632a48f94b4bace4" exitCode=0 Dec 09 15:54:20 crc kubenswrapper[4770]: I1209 15:54:20.267129 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfj9c" event={"ID":"128803f7-9991-4b51-bd3c-76c4aa76855f","Type":"ContainerDied","Data":"cfe87f1c92cb06e73b2727910f085093a45892e39e4fc8b0632a48f94b4bace4"} Dec 09 15:54:20 crc kubenswrapper[4770]: E1209 15:54:20.590072 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:54:22 crc kubenswrapper[4770]: I1209 15:54:22.295518 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfj9c" event={"ID":"128803f7-9991-4b51-bd3c-76c4aa76855f","Type":"ContainerStarted","Data":"8b761bb170a827b2387c660871f6086b830a623c79df8eff22bc19f84afb583a"} Dec 09 15:54:22 crc kubenswrapper[4770]: I1209 15:54:22.315616 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wfj9c" podStartSLOduration=2.900170476 podStartE2EDuration="13.315586276s" 
podCreationTimestamp="2025-12-09 15:54:09 +0000 UTC" firstStartedPulling="2025-12-09 15:54:11.180892665 +0000 UTC m=+5483.077094801" lastFinishedPulling="2025-12-09 15:54:21.596308455 +0000 UTC m=+5493.492510601" observedRunningTime="2025-12-09 15:54:22.311961696 +0000 UTC m=+5494.208163882" watchObservedRunningTime="2025-12-09 15:54:22.315586276 +0000 UTC m=+5494.211788432" Dec 09 15:54:29 crc kubenswrapper[4770]: I1209 15:54:29.452774 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:29 crc kubenswrapper[4770]: I1209 15:54:29.453355 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:29 crc kubenswrapper[4770]: I1209 15:54:29.517074 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:30 crc kubenswrapper[4770]: I1209 15:54:30.449514 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wfj9c" Dec 09 15:54:30 crc kubenswrapper[4770]: I1209 15:54:30.530839 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfj9c"] Dec 09 15:54:30 crc kubenswrapper[4770]: I1209 15:54:30.576828 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lctvc"] Dec 09 15:54:30 crc kubenswrapper[4770]: I1209 15:54:30.577066 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lctvc" podUID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerName="registry-server" containerID="cri-o://e559554ebcb0f72cdf859fe88160472f7b6e0a9f53b55924d178756eb4a2e86e" gracePeriod=2 Dec 09 15:54:30 crc kubenswrapper[4770]: E1209 15:54:30.589279 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:54:31 crc kubenswrapper[4770]: I1209 15:54:31.117300 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv_7cb6ff7a-fa85-4173-b3dd-333fe01ec347/util/0.log" Dec 09 15:54:31 crc kubenswrapper[4770]: I1209 15:54:31.389311 4770 generic.go:334] "Generic (PLEG): container finished" podID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerID="e559554ebcb0f72cdf859fe88160472f7b6e0a9f53b55924d178756eb4a2e86e" exitCode=0 Dec 09 15:54:31 crc kubenswrapper[4770]: I1209 15:54:31.389387 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lctvc" event={"ID":"e9c6d8eb-dff6-4490-85de-29e306e835ad","Type":"ContainerDied","Data":"e559554ebcb0f72cdf859fe88160472f7b6e0a9f53b55924d178756eb4a2e86e"} Dec 09 15:54:31 crc kubenswrapper[4770]: I1209 15:54:31.541902 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv_7cb6ff7a-fa85-4173-b3dd-333fe01ec347/pull/0.log" Dec 09 15:54:31 crc kubenswrapper[4770]: I1209 15:54:31.612668 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv_7cb6ff7a-fa85-4173-b3dd-333fe01ec347/util/0.log" Dec 09 15:54:31 crc kubenswrapper[4770]: I1209 15:54:31.655146 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv_7cb6ff7a-fa85-4173-b3dd-333fe01ec347/pull/0.log" Dec 09 15:54:31 crc kubenswrapper[4770]: I1209 15:54:31.980333 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv_7cb6ff7a-fa85-4173-b3dd-333fe01ec347/util/0.log" Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.526093 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv_7cb6ff7a-fa85-4173-b3dd-333fe01ec347/extract/0.log" Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.629052 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_65e7c723120bdfba43d79625bb955dbaa07f191b2c0f9a86874e38e1692zltv_7cb6ff7a-fa85-4173-b3dd-333fe01ec347/pull/0.log" Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.793994 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lctvc" Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.855550 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-68275_218b9184-d581-40d1-bc52-734507d47b65/kube-rbac-proxy/0.log" Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.873187 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-utilities\") pod \"e9c6d8eb-dff6-4490-85de-29e306e835ad\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.873273 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb7lx\" (UniqueName: \"kubernetes.io/projected/e9c6d8eb-dff6-4490-85de-29e306e835ad-kube-api-access-qb7lx\") pod \"e9c6d8eb-dff6-4490-85de-29e306e835ad\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.873346 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-catalog-content\") pod \"e9c6d8eb-dff6-4490-85de-29e306e835ad\" (UID: \"e9c6d8eb-dff6-4490-85de-29e306e835ad\") " Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.876606 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-utilities" (OuterVolumeSpecName: "utilities") pod "e9c6d8eb-dff6-4490-85de-29e306e835ad" (UID: "e9c6d8eb-dff6-4490-85de-29e306e835ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.935231 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c6d8eb-dff6-4490-85de-29e306e835ad-kube-api-access-qb7lx" (OuterVolumeSpecName: "kube-api-access-qb7lx") pod "e9c6d8eb-dff6-4490-85de-29e306e835ad" (UID: "e9c6d8eb-dff6-4490-85de-29e306e835ad"). 
InnerVolumeSpecName "kube-api-access-qb7lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.977659 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.977936 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb7lx\" (UniqueName: \"kubernetes.io/projected/e9c6d8eb-dff6-4490-85de-29e306e835ad-kube-api-access-qb7lx\") on node \"crc\" DevicePath \"\"" Dec 09 15:54:32 crc kubenswrapper[4770]: I1209 15:54:32.998799 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9c6d8eb-dff6-4490-85de-29e306e835ad" (UID: "e9c6d8eb-dff6-4490-85de-29e306e835ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.053949 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-68275_218b9184-d581-40d1-bc52-734507d47b65/manager/0.log" Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.079453 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c6d8eb-dff6-4490-85de-29e306e835ad-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.103237 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-twpm6_fadaf4e5-6b66-4dc3-b51e-e4700db03792/manager/0.log" Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.119375 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-twpm6_fadaf4e5-6b66-4dc3-b51e-e4700db03792/kube-rbac-proxy/0.log" Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.414790 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lctvc" event={"ID":"e9c6d8eb-dff6-4490-85de-29e306e835ad","Type":"ContainerDied","Data":"df8190881f76e01e0692173e1bb79de3b4ecaf1af1b7dd91d8736a5858af7152"} Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.414847 4770 scope.go:117] "RemoveContainer" containerID="e559554ebcb0f72cdf859fe88160472f7b6e0a9f53b55924d178756eb4a2e86e" Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.414892 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lctvc"
Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.452272 4770 scope.go:117] "RemoveContainer" containerID="69845fcfc3660a0c52bc7e9b405d465c9f6c095ce695f5b93760ee821b15b39d"
Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.455655 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lctvc"]
Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.469953 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lctvc"]
Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.476892 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-ljx9f_706fc0bc-1168-477f-b4ce-b30ea5b70bcf/manager/0.log"
Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.493975 4770 scope.go:117] "RemoveContainer" containerID="3355157c8bc407f887f5a6e00930f41d3155b3ba9482294407684ab834e7c02e"
Dec 09 15:54:33 crc kubenswrapper[4770]: E1209 15:54:33.591285 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:54:33 crc kubenswrapper[4770]: I1209 15:54:33.715436 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-ljx9f_706fc0bc-1168-477f-b4ce-b30ea5b70bcf/kube-rbac-proxy/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.001756 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-zx7w6_5b04f571-f2cc-4486-91c9-d6f9f710f7fd/manager/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.027122 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-zx7w6_5b04f571-f2cc-4486-91c9-d6f9f710f7fd/kube-rbac-proxy/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.193304 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6k7ff_e3ac9095-4890-4130-b7f2-00c7927f6890/kube-rbac-proxy/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.229956 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6k7ff_e3ac9095-4890-4130-b7f2-00c7927f6890/manager/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.356324 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-52xhp_60c59cb7-e525-45d2-a544-4b9a2dc6bbab/kube-rbac-proxy/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.430682 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-52xhp_60c59cb7-e525-45d2-a544-4b9a2dc6bbab/manager/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.515953 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-55h7r_01836e5a-2708-4b73-b24d-79f804c8e0ef/kube-rbac-proxy/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.602007 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c6d8eb-dff6-4490-85de-29e306e835ad" path="/var/lib/kubelet/pods/e9c6d8eb-dff6-4490-85de-29e306e835ad/volumes"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.674123 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-r2rm9_6c377cc4-9030-4cc7-96b9-68d9634e24da/kube-rbac-proxy/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.797213 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-r2rm9_6c377cc4-9030-4cc7-96b9-68d9634e24da/manager/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.821508 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-55h7r_01836e5a-2708-4b73-b24d-79f804c8e0ef/manager/0.log"
Dec 09 15:54:34 crc kubenswrapper[4770]: I1209 15:54:34.973773 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-9ff9c_5dcc1318-bc40-49c5-b6e1-718c06af70f3/kube-rbac-proxy/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.136407 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-9ff9c_5dcc1318-bc40-49c5-b6e1-718c06af70f3/manager/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.180974 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-fjnk7_b3f7f32e-9cb9-42e8-8aca-86bb7b16479d/kube-rbac-proxy/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.294560 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-fjnk7_b3f7f32e-9cb9-42e8-8aca-86bb7b16479d/manager/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.332139 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-6rp4l_847e2267-4743-4e6d-b76f-81bb0402a8e2/kube-rbac-proxy/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.422806 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-6rp4l_847e2267-4743-4e6d-b76f-81bb0402a8e2/manager/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.543217 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-m4rlv_a697db6f-78dd-4f87-bafb-b1ad6ddfa241/kube-rbac-proxy/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.690630 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-m4rlv_a697db6f-78dd-4f87-bafb-b1ad6ddfa241/manager/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.757151 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-lvzw2_1e594101-5c21-4a4a-8027-39449d107481/kube-rbac-proxy/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.891029 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-lvzw2_1e594101-5c21-4a4a-8027-39449d107481/manager/0.log"
Dec 09 15:54:35 crc kubenswrapper[4770]: I1209 15:54:35.942160 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-zvfn5_369cb688-d8db-443a-beef-6f0cf31b31cf/kube-rbac-proxy/0.log"
Dec 09 15:54:36 crc kubenswrapper[4770]: I1209 15:54:36.042652 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-zvfn5_369cb688-d8db-443a-beef-6f0cf31b31cf/manager/0.log"
Dec 09 15:54:36 crc kubenswrapper[4770]: I1209 15:54:36.131024 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f5bk9l_5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975/kube-rbac-proxy/0.log"
Dec 09 15:54:36 crc kubenswrapper[4770]: I1209 15:54:36.198720 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f5bk9l_5d96dcbe-2cdc-4640-8c4b-8f5fc4e2f975/manager/0.log"
Dec 09 15:54:36 crc kubenswrapper[4770]: I1209 15:54:36.519252 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8npb8_f296b8f1-163d-4831-add9-bc8b63e3bf77/registry-server/0.log"
Dec 09 15:54:36 crc kubenswrapper[4770]: I1209 15:54:36.679631 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-574d99c486-2blm8_02f8a1b7-96a5-4f27-865b-941490944ff6/operator/0.log"
Dec 09 15:54:36 crc kubenswrapper[4770]: I1209 15:54:36.683136 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-j48s5_2df32a14-ab02-48cf-94e4-5dd7b72fdcff/kube-rbac-proxy/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.073470 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-j48s5_2df32a14-ab02-48cf-94e4-5dd7b72fdcff/manager/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.159915 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-brn9f_703c76ff-d327-45f7-a9ae-2d60d7629d31/manager/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.181096 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-brn9f_703c76ff-d327-45f7-a9ae-2d60d7629d31/kube-rbac-proxy/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.396926 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7c59bdd89-4cf5f_8fa6c3c5-bb85-4c66-b304-fa19ecb453e4/manager/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.397515 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-c5xpp_8d7e5182-a5a9-4daf-b268-f967a207932c/operator/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.425626 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-6xd5q_02f392e6-53d4-4bdb-bb7c-3ff1e29266bd/kube-rbac-proxy/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.625149 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f89dd7bc5-2vmmm_37bafc12-f467-417c-b3f7-6fc18896b73f/kube-rbac-proxy/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.716570 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-6xd5q_02f392e6-53d4-4bdb-bb7c-3ff1e29266bd/manager/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.927193 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-rfq2b_d65c2397-adaa-461c-9a86-05901e7b3726/kube-rbac-proxy/0.log"
Dec 09 15:54:37 crc kubenswrapper[4770]: I1209 15:54:37.976332 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-rfq2b_d65c2397-adaa-461c-9a86-05901e7b3726/manager/0.log"
Dec 09 15:54:38 crc kubenswrapper[4770]: I1209 15:54:38.067896 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-wdqn5_7ff426e3-2995-401a-9587-b7277f96e1b3/kube-rbac-proxy/0.log"
Dec 09 15:54:38 crc kubenswrapper[4770]: I1209 15:54:38.133366 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f89dd7bc5-2vmmm_37bafc12-f467-417c-b3f7-6fc18896b73f/manager/0.log"
Dec 09 15:54:38 crc kubenswrapper[4770]: I1209 15:54:38.189209 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-wdqn5_7ff426e3-2995-401a-9587-b7277f96e1b3/manager/0.log"
Dec 09 15:54:43 crc kubenswrapper[4770]: E1209 15:54:43.589901 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:54:44 crc kubenswrapper[4770]: I1209 15:54:44.244086 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:54:44 crc kubenswrapper[4770]: I1209 15:54:44.244174 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:54:48 crc kubenswrapper[4770]: E1209 15:54:48.597766 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:54:58 crc kubenswrapper[4770]: E1209 15:54:58.596648 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:55:01 crc kubenswrapper[4770]: I1209 15:55:01.755708 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1d87ac62-20d5-476f-97d9-34d8698fc78f" containerName="galera" probeResult="failure" output="command timed out"
Dec 09 15:55:03 crc kubenswrapper[4770]: E1209 15:55:03.591418 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:55:03 crc kubenswrapper[4770]: I1209 15:55:03.997385 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kpkpk_af84b1fb-1cf8-467d-b4ab-c2a37bcefe0e/control-plane-machine-set-operator/0.log"
Dec 09 15:55:04 crc kubenswrapper[4770]: I1209 15:55:04.007041 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-x9m4l_fe586163-823a-49a4-a93e-55e0cc485b8f/kube-rbac-proxy/0.log"
Dec 09 15:55:04 crc kubenswrapper[4770]: I1209 15:55:04.167568 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-x9m4l_fe586163-823a-49a4-a93e-55e0cc485b8f/machine-api-operator/0.log"
Dec 09 15:55:13 crc kubenswrapper[4770]: E1209 15:55:13.621975 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:55:14 crc kubenswrapper[4770]: I1209 15:55:14.244101 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:55:14 crc kubenswrapper[4770]: I1209 15:55:14.244204 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:55:16 crc kubenswrapper[4770]: E1209 15:55:16.591458 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:55:18 crc kubenswrapper[4770]: I1209 15:55:18.197817 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-hfpn8_bb2fb259-5b01-4326-b16b-891048fa2e18/cert-manager-controller/0.log"
Dec 09 15:55:18 crc kubenswrapper[4770]: I1209 15:55:18.419402 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-dp5sc_b035c358-ae16-4c56-adcb-4d271a8f6006/cert-manager-cainjector/0.log"
Dec 09 15:55:18 crc kubenswrapper[4770]: I1209 15:55:18.471563 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-x7zjn_73b1fc7b-59c7-4427-932c-2a65a41c42e1/cert-manager-webhook/0.log"
Dec 09 15:55:26 crc kubenswrapper[4770]: E1209 15:55:26.591311 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:55:30 crc kubenswrapper[4770]: E1209 15:55:30.590297 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:55:32 crc kubenswrapper[4770]: I1209 15:55:32.113288 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-g7sx7_a88f9c2f-0ff9-4cae-9dc3-541c94c1cdc9/nmstate-console-plugin/0.log"
Dec 09 15:55:32 crc kubenswrapper[4770]: I1209 15:55:32.396594 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-82bn5_eb3ead0c-f191-4518-83b9-98216d653eba/kube-rbac-proxy/0.log"
Dec 09 15:55:32 crc kubenswrapper[4770]: I1209 15:55:32.400221 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lg6q8_406c7a97-d031-4d16-a20d-050cba5596a5/nmstate-handler/0.log"
Dec 09 15:55:32 crc kubenswrapper[4770]: I1209 15:55:32.495981 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-82bn5_eb3ead0c-f191-4518-83b9-98216d653eba/nmstate-metrics/0.log"
Dec 09 15:55:32 crc kubenswrapper[4770]: I1209 15:55:32.830666 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-g49f6_c5ee01e4-882a-47a7-9050-ef1076cca725/nmstate-operator/0.log"
Dec 09 15:55:32 crc kubenswrapper[4770]: I1209 15:55:32.890804 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-j58vx_b6014298-34b9-4a4e-8a5b-578bc2ae90d6/nmstate-webhook/0.log"
Dec 09 15:55:41 crc kubenswrapper[4770]: E1209 15:55:41.589895 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:55:44 crc kubenswrapper[4770]: I1209 15:55:44.243168 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 15:55:44 crc kubenswrapper[4770]: I1209 15:55:44.243669 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 15:55:44 crc kubenswrapper[4770]: I1209 15:55:44.243717 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj"
Dec 09 15:55:44 crc kubenswrapper[4770]: I1209 15:55:44.244573 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e05582f6cc1c08a1110f5fd5979afe68a350503e91b64b764661d2248656855"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 15:55:44 crc kubenswrapper[4770]: I1209 15:55:44.244628 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://8e05582f6cc1c08a1110f5fd5979afe68a350503e91b64b764661d2248656855" gracePeriod=600
Dec 09 15:55:44 crc kubenswrapper[4770]: E1209 15:55:44.590115 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:55:45 crc kubenswrapper[4770]: I1209 15:55:45.173450 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="8e05582f6cc1c08a1110f5fd5979afe68a350503e91b64b764661d2248656855" exitCode=0
Dec 09 15:55:45 crc kubenswrapper[4770]: I1209 15:55:45.173522 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"8e05582f6cc1c08a1110f5fd5979afe68a350503e91b64b764661d2248656855"}
Dec 09 15:55:45 crc kubenswrapper[4770]: I1209 15:55:45.173812 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb"}
Dec 09 15:55:45 crc kubenswrapper[4770]: I1209 15:55:45.173870 4770 scope.go:117] "RemoveContainer" containerID="5314567754fc696deebcf8f4893624b6f52164461b949da467f17afdb4f95cb8"
Dec 09 15:55:46 crc kubenswrapper[4770]: I1209 15:55:46.572642 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7f5c5648d4-9lc92_a562e446-afad-41e8-9169-41f2e14712a2/manager/0.log"
Dec 09 15:55:46 crc kubenswrapper[4770]: I1209 15:55:46.594381 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7f5c5648d4-9lc92_a562e446-afad-41e8-9169-41f2e14712a2/kube-rbac-proxy/0.log"
Dec 09 15:55:52 crc kubenswrapper[4770]: E1209 15:55:52.590912 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:55:58 crc kubenswrapper[4770]: E1209 15:55:58.598076 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:56:01 crc kubenswrapper[4770]: I1209 15:56:01.897581 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-2qk72_c30d2bd5-635c-4c4b-bd25-c0a0d91009ca/kube-rbac-proxy/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.032904 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-2qk72_c30d2bd5-635c-4c4b-bd25-c0a0d91009ca/controller/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.117817 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-frr-files/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.316334 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-metrics/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.329752 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-frr-files/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.341241 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-reloader/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.348240 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-reloader/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.601798 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-frr-files/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.604216 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-reloader/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.619623 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-metrics/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.682269 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-metrics/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.901156 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-frr-files/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.911580 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-reloader/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.945673 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/cp-metrics/0.log"
Dec 09 15:56:02 crc kubenswrapper[4770]: I1209 15:56:02.971243 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/controller/0.log"
Dec 09 15:56:03 crc kubenswrapper[4770]: I1209 15:56:03.153995 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/frr-metrics/0.log"
Dec 09 15:56:03 crc kubenswrapper[4770]: I1209 15:56:03.164220 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/kube-rbac-proxy/0.log"
Dec 09 15:56:03 crc kubenswrapper[4770]: I1209 15:56:03.251507 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/kube-rbac-proxy-frr/0.log"
Dec 09 15:56:03 crc kubenswrapper[4770]: I1209 15:56:03.375170 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/reloader/0.log"
Dec 09 15:56:03 crc kubenswrapper[4770]: I1209 15:56:03.516485 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-4tn79_35cc164c-de5c-43cd-8a8d-38684a35227e/frr-k8s-webhook-server/0.log"
Dec 09 15:56:03 crc kubenswrapper[4770]: I1209 15:56:03.699040 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-699f6cb9bd-z46mv_52ee6ee8-2d9b-47f4-b058-7c85c1673f97/manager/0.log"
Dec 09 15:56:03 crc kubenswrapper[4770]: I1209 15:56:03.949186 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7d5876576f-fnms9_a739688c-a6ba-4579-bbcf-7a13f53bc412/webhook-server/0.log"
Dec 09 15:56:04 crc kubenswrapper[4770]: I1209 15:56:04.207360 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-b524m_d5627024-8337-474b-b04c-95c60e08308e/kube-rbac-proxy/0.log"
Dec 09 15:56:04 crc kubenswrapper[4770]: I1209 15:56:04.698996 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9c7pl_0355fd4f-c3df-4794-a794-567f018e52fa/frr/0.log"
Dec 09 15:56:04 crc kubenswrapper[4770]: I1209 15:56:04.785045 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-b524m_d5627024-8337-474b-b04c-95c60e08308e/speaker/0.log"
Dec 09 15:56:06 crc kubenswrapper[4770]: E1209 15:56:06.591229 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:56:12 crc kubenswrapper[4770]: E1209 15:56:12.590778 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:56:18 crc kubenswrapper[4770]: I1209 15:56:18.391292 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp_57f2139a-7f61-46a6-b130-cce8398d7211/util/0.log"
Dec 09 15:56:18 crc kubenswrapper[4770]: E1209 15:56:18.597025 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:56:18 crc kubenswrapper[4770]: I1209 15:56:18.630736 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp_57f2139a-7f61-46a6-b130-cce8398d7211/util/0.log"
Dec 09 15:56:18 crc kubenswrapper[4770]: I1209 15:56:18.635542 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp_57f2139a-7f61-46a6-b130-cce8398d7211/pull/0.log"
Dec 09 15:56:18 crc kubenswrapper[4770]: I1209 15:56:18.667390 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp_57f2139a-7f61-46a6-b130-cce8398d7211/pull/0.log"
Dec 09 15:56:18 crc kubenswrapper[4770]: I1209 15:56:18.861482 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp_57f2139a-7f61-46a6-b130-cce8398d7211/util/0.log"
Dec 09 15:56:18 crc kubenswrapper[4770]: I1209 15:56:18.909646 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp_57f2139a-7f61-46a6-b130-cce8398d7211/extract/0.log"
Dec 09 15:56:18 crc kubenswrapper[4770]: I1209 15:56:18.927619 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8chgp_57f2139a-7f61-46a6-b130-cce8398d7211/pull/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.093516 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl_a6fdfcce-b460-4869-acac-7ca04cb0b308/util/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.233739 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl_a6fdfcce-b460-4869-acac-7ca04cb0b308/util/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.256441 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl_a6fdfcce-b460-4869-acac-7ca04cb0b308/pull/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.273521 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl_a6fdfcce-b460-4869-acac-7ca04cb0b308/pull/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.431900 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl_a6fdfcce-b460-4869-acac-7ca04cb0b308/util/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.450330 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl_a6fdfcce-b460-4869-acac-7ca04cb0b308/pull/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.552470 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210hwzsl_a6fdfcce-b460-4869-acac-7ca04cb0b308/extract/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.666112 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk_d603f7c6-6898-40e0-a1ba-8411253059af/util/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.868424 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk_d603f7c6-6898-40e0-a1ba-8411253059af/util/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.887566 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk_d603f7c6-6898-40e0-a1ba-8411253059af/pull/0.log"
Dec 09 15:56:19 crc kubenswrapper[4770]: I1209 15:56:19.889684 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk_d603f7c6-6898-40e0-a1ba-8411253059af/pull/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.063875 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk_d603f7c6-6898-40e0-a1ba-8411253059af/extract/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.075102 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk_d603f7c6-6898-40e0-a1ba-8411253059af/pull/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.079312 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1gw7vk_d603f7c6-6898-40e0-a1ba-8411253059af/util/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.221632 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2_20fc30df-3858-4501-ba57-581d2c933e56/util/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.426587 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2_20fc30df-3858-4501-ba57-581d2c933e56/util/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.437488 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2_20fc30df-3858-4501-ba57-581d2c933e56/pull/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.501741 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2_20fc30df-3858-4501-ba57-581d2c933e56/pull/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.691872 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2_20fc30df-3858-4501-ba57-581d2c933e56/util/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.752366 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2_20fc30df-3858-4501-ba57-581d2c933e56/pull/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.771439 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839cmh2_20fc30df-3858-4501-ba57-581d2c933e56/extract/0.log"
Dec 09 15:56:20 crc kubenswrapper[4770]: I1209 15:56:20.917103 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfj9c_128803f7-9991-4b51-bd3c-76c4aa76855f/extract-utilities/0.log"
Dec 09 15:56:21 crc kubenswrapper[4770]: I1209 15:56:21.154910 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfj9c_128803f7-9991-4b51-bd3c-76c4aa76855f/extract-content/0.log"
Dec 09 15:56:21 crc kubenswrapper[4770]: I1209 15:56:21.228690 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfj9c_128803f7-9991-4b51-bd3c-76c4aa76855f/extract-utilities/0.log"
Dec 09 15:56:21 crc kubenswrapper[4770]: I1209 15:56:21.353557 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfj9c_128803f7-9991-4b51-bd3c-76c4aa76855f/extract-content/0.log"
Dec 09 15:56:21 crc kubenswrapper[4770]: I1209 15:56:21.513671 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfj9c_128803f7-9991-4b51-bd3c-76c4aa76855f/extract-content/0.log"
Dec 09 15:56:21 crc kubenswrapper[4770]: I1209 15:56:21.585574 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfj9c_128803f7-9991-4b51-bd3c-76c4aa76855f/extract-utilities/0.log"
Dec 09 15:56:21 crc kubenswrapper[4770]: I1209 15:56:21.695005 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfj9c_128803f7-9991-4b51-bd3c-76c4aa76855f/registry-server/0.log"
Dec 09 15:56:21 crc kubenswrapper[4770]: I1209 15:56:21.794312 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wzq58_1a59e82d-89cd-49ea-a28e-9ce9a16ea47b/extract-utilities/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.038104 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wzq58_1a59e82d-89cd-49ea-a28e-9ce9a16ea47b/extract-content/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.057469 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wzq58_1a59e82d-89cd-49ea-a28e-9ce9a16ea47b/extract-content/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.080067 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wzq58_1a59e82d-89cd-49ea-a28e-9ce9a16ea47b/extract-utilities/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.177968 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wzq58_1a59e82d-89cd-49ea-a28e-9ce9a16ea47b/extract-utilities/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.262204 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wzq58_1a59e82d-89cd-49ea-a28e-9ce9a16ea47b/extract-content/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.440798 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-t2q8w_639ac3bd-8610-4f95-98f8-ad53a5c0d1fd/marketplace-operator/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.589670 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7scr_133b114f-fd6f-429b-8d37-4ac8e0a48730/extract-utilities/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.806589 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7scr_133b114f-fd6f-429b-8d37-4ac8e0a48730/extract-content/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.905801 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7scr_133b114f-fd6f-429b-8d37-4ac8e0a48730/extract-content/0.log"
Dec 09 15:56:22 crc kubenswrapper[4770]: I1209 15:56:22.932845 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7scr_133b114f-fd6f-429b-8d37-4ac8e0a48730/extract-utilities/0.log"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.095331 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7scr_133b114f-fd6f-429b-8d37-4ac8e0a48730/extract-content/0.log"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.104919 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wzq58_1a59e82d-89cd-49ea-a28e-9ce9a16ea47b/registry-server/0.log"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.112489 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7scr_133b114f-fd6f-429b-8d37-4ac8e0a48730/extract-utilities/0.log"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.154071 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-55zff"]
Dec 09 15:56:23 crc kubenswrapper[4770]: E1209 15:56:23.154631 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerName="extract-utilities"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.154663 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerName="extract-utilities"
Dec 09 15:56:23 crc kubenswrapper[4770]: E1209 15:56:23.154678 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerName="extract-content"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.154684 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerName="extract-content"
Dec 09 15:56:23 crc kubenswrapper[4770]: E1209 15:56:23.154701 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerName="registry-server"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.154708 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerName="registry-server"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.154968 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c6d8eb-dff6-4490-85de-29e306e835ad" containerName="registry-server"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.157035 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.165517 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55zff"]
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.321898 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-catalog-content\") pod \"redhat-operators-55zff\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") " pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.321955 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgfr\" (UniqueName: \"kubernetes.io/projected/935227fa-9e5c-4caa-9d3f-02187f596702-kube-api-access-8zgfr\") pod \"redhat-operators-55zff\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") " pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.322089 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-utilities\") pod \"redhat-operators-55zff\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") " pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.353990 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7scr_133b114f-fd6f-429b-8d37-4ac8e0a48730/registry-server/0.log"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.366251 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vt9v_a277c80b-567a-4a1b-84ef-d0ee49dfe9bb/extract-utilities/0.log"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.423941 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-utilities\") pod \"redhat-operators-55zff\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") " pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.424121 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-catalog-content\") pod \"redhat-operators-55zff\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") " pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.424160 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgfr\" (UniqueName: \"kubernetes.io/projected/935227fa-9e5c-4caa-9d3f-02187f596702-kube-api-access-8zgfr\") pod \"redhat-operators-55zff\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") " pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.424472 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-utilities\") pod \"redhat-operators-55zff\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") " pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.424659 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-catalog-content\") pod \"redhat-operators-55zff\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") " pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.455751 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgfr\" (UniqueName: \"kubernetes.io/projected/935227fa-9e5c-4caa-9d3f-02187f596702-kube-api-access-8zgfr\") pod \"redhat-operators-55zff\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") " pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.491765 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.648799 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vt9v_a277c80b-567a-4a1b-84ef-d0ee49dfe9bb/extract-content/0.log"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.724836 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vt9v_a277c80b-567a-4a1b-84ef-d0ee49dfe9bb/extract-utilities/0.log"
Dec 09 15:56:23 crc kubenswrapper[4770]: I1209 15:56:23.800459 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vt9v_a277c80b-567a-4a1b-84ef-d0ee49dfe9bb/extract-content/0.log"
Dec 09 15:56:24 crc kubenswrapper[4770]: I1209 15:56:24.023676 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vt9v_a277c80b-567a-4a1b-84ef-d0ee49dfe9bb/extract-utilities/0.log"
Dec 09 15:56:24 crc kubenswrapper[4770]: I1209 15:56:24.038430 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vt9v_a277c80b-567a-4a1b-84ef-d0ee49dfe9bb/extract-content/0.log"
Dec 09 15:56:24 crc kubenswrapper[4770]: I1209 15:56:24.122173 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55zff"]
Dec 09 15:56:24 crc kubenswrapper[4770]: I1209 15:56:24.228876 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55zff" event={"ID":"935227fa-9e5c-4caa-9d3f-02187f596702","Type":"ContainerStarted","Data":"a22b9beaff4ff51c1cc7309f36913c0826e19d5560a80acd2022eb0f66fea77c"}
Dec 09 15:56:24 crc kubenswrapper[4770]: I1209 15:56:24.236661 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vt9v_a277c80b-567a-4a1b-84ef-d0ee49dfe9bb/registry-server/0.log"
Dec 09 15:56:24 crc kubenswrapper[4770]: E1209 15:56:24.590079 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:56:25 crc kubenswrapper[4770]: I1209 15:56:25.239599 4770 generic.go:334] "Generic (PLEG): container finished" podID="935227fa-9e5c-4caa-9d3f-02187f596702" containerID="987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08" exitCode=0
Dec 09 15:56:25 crc kubenswrapper[4770]: I1209 15:56:25.239665 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55zff" event={"ID":"935227fa-9e5c-4caa-9d3f-02187f596702","Type":"ContainerDied","Data":"987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08"}
Dec 09 15:56:26 crc kubenswrapper[4770]: I1209 15:56:26.251832 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55zff" event={"ID":"935227fa-9e5c-4caa-9d3f-02187f596702","Type":"ContainerStarted","Data":"1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108"}
Dec 09 15:56:27 crc kubenswrapper[4770]: I1209 15:56:27.950037 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nmdhp"]
Dec 09 15:56:27 crc kubenswrapper[4770]: I1209 15:56:27.952538 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:27 crc kubenswrapper[4770]: I1209 15:56:27.964336 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmdhp"]
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.148952 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7bms\" (UniqueName: \"kubernetes.io/projected/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-kube-api-access-b7bms\") pod \"redhat-marketplace-nmdhp\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.149544 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-catalog-content\") pod \"redhat-marketplace-nmdhp\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.149638 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-utilities\") pod \"redhat-marketplace-nmdhp\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.252181 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7bms\" (UniqueName: \"kubernetes.io/projected/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-kube-api-access-b7bms\") pod \"redhat-marketplace-nmdhp\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.252537 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-catalog-content\") pod \"redhat-marketplace-nmdhp\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.252637 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-utilities\") pod \"redhat-marketplace-nmdhp\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.252997 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-catalog-content\") pod \"redhat-marketplace-nmdhp\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.253012 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-utilities\") pod \"redhat-marketplace-nmdhp\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.284318 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7bms\" (UniqueName: \"kubernetes.io/projected/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-kube-api-access-b7bms\") pod \"redhat-marketplace-nmdhp\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.345769 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:28 crc kubenswrapper[4770]: W1209 15:56:28.909856 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3106d42e_85ab_4a60_93aa_8e08f7acc6a7.slice/crio-8d1b0869089aa1818bcedcefeeaacc70ee1c5808d2a50d4c66ffda34953f8315 WatchSource:0}: Error finding container 8d1b0869089aa1818bcedcefeeaacc70ee1c5808d2a50d4c66ffda34953f8315: Status 404 returned error can't find the container with id 8d1b0869089aa1818bcedcefeeaacc70ee1c5808d2a50d4c66ffda34953f8315
Dec 09 15:56:28 crc kubenswrapper[4770]: I1209 15:56:28.918339 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmdhp"]
Dec 09 15:56:29 crc kubenswrapper[4770]: I1209 15:56:29.280933 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmdhp" event={"ID":"3106d42e-85ab-4a60-93aa-8e08f7acc6a7","Type":"ContainerStarted","Data":"8d1b0869089aa1818bcedcefeeaacc70ee1c5808d2a50d4c66ffda34953f8315"}
Dec 09 15:56:30 crc kubenswrapper[4770]: I1209 15:56:30.290166 4770 generic.go:334] "Generic (PLEG): container finished" podID="935227fa-9e5c-4caa-9d3f-02187f596702" containerID="1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108" exitCode=0
Dec 09 15:56:30 crc kubenswrapper[4770]: I1209 15:56:30.290260 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55zff" event={"ID":"935227fa-9e5c-4caa-9d3f-02187f596702","Type":"ContainerDied","Data":"1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108"}
Dec 09 15:56:30 crc kubenswrapper[4770]: I1209 15:56:30.300303 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmdhp" event={"ID":"3106d42e-85ab-4a60-93aa-8e08f7acc6a7","Type":"ContainerStarted","Data":"1b8cce4f6d51f04c2bdc87650757e3971fb6738c1b95c5063411b970766fde71"}
Dec 09 15:56:31 crc kubenswrapper[4770]: I1209 15:56:31.310184 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55zff" event={"ID":"935227fa-9e5c-4caa-9d3f-02187f596702","Type":"ContainerStarted","Data":"2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695"}
Dec 09 15:56:31 crc kubenswrapper[4770]: I1209 15:56:31.312761 4770 generic.go:334] "Generic (PLEG): container finished" podID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerID="1b8cce4f6d51f04c2bdc87650757e3971fb6738c1b95c5063411b970766fde71" exitCode=0
Dec 09 15:56:31 crc kubenswrapper[4770]: I1209 15:56:31.312787 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmdhp" event={"ID":"3106d42e-85ab-4a60-93aa-8e08f7acc6a7","Type":"ContainerDied","Data":"1b8cce4f6d51f04c2bdc87650757e3971fb6738c1b95c5063411b970766fde71"}
Dec 09 15:56:31 crc kubenswrapper[4770]: I1209 15:56:31.336357 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-55zff" podStartSLOduration=2.685355568 podStartE2EDuration="8.336304409s" podCreationTimestamp="2025-12-09 15:56:23 +0000 UTC" firstStartedPulling="2025-12-09 15:56:25.241719739 +0000 UTC m=+5617.137921875" lastFinishedPulling="2025-12-09 15:56:30.89266858 +0000 UTC m=+5622.788870716" observedRunningTime="2025-12-09 15:56:31.335744413 +0000 UTC m=+5623.231946549" watchObservedRunningTime="2025-12-09 15:56:31.336304409 +0000 UTC m=+5623.232506545"
Dec 09 15:56:31 crc kubenswrapper[4770]: E1209 15:56:31.590488 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:56:33 crc kubenswrapper[4770]: I1209 15:56:33.334457 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmdhp" event={"ID":"3106d42e-85ab-4a60-93aa-8e08f7acc6a7","Type":"ContainerStarted","Data":"d458249fa991d6d38db8ec3ef429a6c96836dc7f644ed73f919dce1d1167ef34"}
Dec 09 15:56:33 crc kubenswrapper[4770]: I1209 15:56:33.492452 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:33 crc kubenswrapper[4770]: I1209 15:56:33.493038 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:34 crc kubenswrapper[4770]: I1209 15:56:34.544302 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-55zff" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" containerName="registry-server" probeResult="failure" output=<
Dec 09 15:56:34 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s
Dec 09 15:56:34 crc kubenswrapper[4770]: >
Dec 09 15:56:35 crc kubenswrapper[4770]: I1209 15:56:35.354253 4770 generic.go:334] "Generic (PLEG): container finished" podID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerID="d458249fa991d6d38db8ec3ef429a6c96836dc7f644ed73f919dce1d1167ef34" exitCode=0
Dec 09 15:56:35 crc kubenswrapper[4770]: I1209 15:56:35.354298 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmdhp" event={"ID":"3106d42e-85ab-4a60-93aa-8e08f7acc6a7","Type":"ContainerDied","Data":"d458249fa991d6d38db8ec3ef429a6c96836dc7f644ed73f919dce1d1167ef34"}
Dec 09 15:56:37 crc kubenswrapper[4770]: I1209 15:56:37.372868 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmdhp" event={"ID":"3106d42e-85ab-4a60-93aa-8e08f7acc6a7","Type":"ContainerStarted","Data":"45bbeb76476c27901771d2f5b3cd60c955796d572c8d082f7656a453f9b901ab"}
Dec 09 15:56:37 crc kubenswrapper[4770]: I1209 15:56:37.406597 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nmdhp" podStartSLOduration=5.222192885 podStartE2EDuration="10.406572981s" podCreationTimestamp="2025-12-09 15:56:27 +0000 UTC" firstStartedPulling="2025-12-09 15:56:31.315700963 +0000 UTC m=+5623.211903099" lastFinishedPulling="2025-12-09 15:56:36.500081059 +0000 UTC m=+5628.396283195" observedRunningTime="2025-12-09 15:56:37.391436866 +0000 UTC m=+5629.287639022" watchObservedRunningTime="2025-12-09 15:56:37.406572981 +0000 UTC m=+5629.302775117"
Dec 09 15:56:38 crc kubenswrapper[4770]: I1209 15:56:38.347016 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:38 crc kubenswrapper[4770]: I1209 15:56:38.347408 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:38 crc kubenswrapper[4770]: I1209 15:56:38.401155 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nmdhp"
Dec 09 15:56:38 crc kubenswrapper[4770]: E1209 15:56:38.597582 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 15:56:40 crc kubenswrapper[4770]: I1209 15:56:40.292873 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-gtnb5_80dad146-0216-45e2-9007-7c42769b1cde/prometheus-operator/0.log"
Dec 09 15:56:40 crc kubenswrapper[4770]: I1209 15:56:40.485344 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bb95f858b-8lvcg_46377460-fbee-4d96-99da-d202b1cf4988/prometheus-operator-admission-webhook/0.log"
Dec 09 15:56:40 crc kubenswrapper[4770]: I1209 15:56:40.564838 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bb95f858b-tm97g_e6837196-3529-4d41-ad3a-103cff3d6fa6/prometheus-operator-admission-webhook/0.log"
Dec 09 15:56:40 crc kubenswrapper[4770]: I1209 15:56:40.724404 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-6zbhb_b4b6e7d6-2797-4d98-bf28-e8e458a538e3/operator/0.log"
Dec 09 15:56:40 crc kubenswrapper[4770]: I1209 15:56:40.810293 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-t4rsk_e8747f27-08b4-4075-9275-1f3cf8d5edea/perses-operator/0.log"
Dec 09 15:56:42 crc kubenswrapper[4770]: E1209 15:56:42.590308 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d"
Dec 09 15:56:43 crc kubenswrapper[4770]: I1209 15:56:43.545267 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:43 crc kubenswrapper[4770]: I1209 15:56:43.596256 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:43 crc kubenswrapper[4770]: I1209 15:56:43.794257 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55zff"]
Dec 09 15:56:45 crc kubenswrapper[4770]: I1209 15:56:45.448829 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-55zff" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" containerName="registry-server" containerID="cri-o://2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695" gracePeriod=2
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.067388 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55zff"
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.219874 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-utilities\") pod \"935227fa-9e5c-4caa-9d3f-02187f596702\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") "
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.220012 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zgfr\" (UniqueName: \"kubernetes.io/projected/935227fa-9e5c-4caa-9d3f-02187f596702-kube-api-access-8zgfr\") pod \"935227fa-9e5c-4caa-9d3f-02187f596702\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") "
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.220091 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-catalog-content\") pod \"935227fa-9e5c-4caa-9d3f-02187f596702\" (UID: \"935227fa-9e5c-4caa-9d3f-02187f596702\") "
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.221330 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-utilities" (OuterVolumeSpecName: "utilities") pod "935227fa-9e5c-4caa-9d3f-02187f596702" (UID: "935227fa-9e5c-4caa-9d3f-02187f596702"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.235684 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/935227fa-9e5c-4caa-9d3f-02187f596702-kube-api-access-8zgfr" (OuterVolumeSpecName: "kube-api-access-8zgfr") pod "935227fa-9e5c-4caa-9d3f-02187f596702" (UID: "935227fa-9e5c-4caa-9d3f-02187f596702"). InnerVolumeSpecName "kube-api-access-8zgfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.323213 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.323251 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zgfr\" (UniqueName: \"kubernetes.io/projected/935227fa-9e5c-4caa-9d3f-02187f596702-kube-api-access-8zgfr\") on node \"crc\" DevicePath \"\""
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.336106 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "935227fa-9e5c-4caa-9d3f-02187f596702" (UID: "935227fa-9e5c-4caa-9d3f-02187f596702"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.424901 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935227fa-9e5c-4caa-9d3f-02187f596702-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.461235 4770 generic.go:334] "Generic (PLEG): container finished" podID="935227fa-9e5c-4caa-9d3f-02187f596702" containerID="2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695" exitCode=0
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.461286 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55zff" event={"ID":"935227fa-9e5c-4caa-9d3f-02187f596702","Type":"ContainerDied","Data":"2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695"}
Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.461310 4770 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-55zff" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.461327 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55zff" event={"ID":"935227fa-9e5c-4caa-9d3f-02187f596702","Type":"ContainerDied","Data":"a22b9beaff4ff51c1cc7309f36913c0826e19d5560a80acd2022eb0f66fea77c"} Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.461353 4770 scope.go:117] "RemoveContainer" containerID="2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.500437 4770 scope.go:117] "RemoveContainer" containerID="1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.502898 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55zff"] Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.512149 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-55zff"] Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.525120 4770 scope.go:117] "RemoveContainer" containerID="987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.572173 4770 scope.go:117] "RemoveContainer" containerID="2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695" Dec 09 15:56:46 crc kubenswrapper[4770]: E1209 15:56:46.572546 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695\": container with ID starting with 2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695 not found: ID does not exist" containerID="2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.572586 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695"} err="failed to get container status \"2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695\": rpc error: code = NotFound desc = could not find container \"2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695\": container with ID starting with 2b77aa53ed11e93ca1a2375c26ceed2066834c9066c19420ed18d8cd5d14d695 not found: ID does not exist" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.572644 4770 scope.go:117] "RemoveContainer" containerID="1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108" Dec 09 15:56:46 crc kubenswrapper[4770]: E1209 15:56:46.574261 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108\": container with ID starting with 1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108 not found: ID does not exist" containerID="1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.574298 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108"} err="failed to get container status \"1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108\": rpc error: code = NotFound desc = could not find container 
\"1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108\": container with ID starting with 1789ea55f34d65c2b4309faa8fe965cf31e59cbc59cd7e976104e564fb83d108 not found: ID does not exist" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.574329 4770 scope.go:117] "RemoveContainer" containerID="987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08" Dec 09 15:56:46 crc kubenswrapper[4770]: E1209 15:56:46.574672 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08\": container with ID starting with 987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08 not found: ID does not exist" containerID="987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.574703 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08"} err="failed to get container status \"987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08\": rpc error: code = NotFound desc = could not find container \"987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08\": container with ID starting with 987a58d2116b3dac709d9111b18db79c98ee850b8ea05258ecf11c618ff1ec08 not found: ID does not exist" Dec 09 15:56:46 crc kubenswrapper[4770]: I1209 15:56:46.617823 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" path="/var/lib/kubelet/pods/935227fa-9e5c-4caa-9d3f-02187f596702/volumes" Dec 09 15:56:48 crc kubenswrapper[4770]: I1209 15:56:48.400184 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nmdhp" Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.191642 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmdhp"] Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.191937 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nmdhp" podUID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerName="registry-server" containerID="cri-o://45bbeb76476c27901771d2f5b3cd60c955796d572c8d082f7656a453f9b901ab" gracePeriod=2 Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.515544 4770 generic.go:334] "Generic (PLEG): container finished" podID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerID="45bbeb76476c27901771d2f5b3cd60c955796d572c8d082f7656a453f9b901ab" exitCode=0 Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.515866 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmdhp" event={"ID":"3106d42e-85ab-4a60-93aa-8e08f7acc6a7","Type":"ContainerDied","Data":"45bbeb76476c27901771d2f5b3cd60c955796d572c8d082f7656a453f9b901ab"} Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.712225 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmdhp" Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.723839 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-utilities\") pod \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.724104 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-catalog-content\") pod \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.724383 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7bms\" (UniqueName: \"kubernetes.io/projected/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-kube-api-access-b7bms\") pod \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\" (UID: \"3106d42e-85ab-4a60-93aa-8e08f7acc6a7\") " Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.724638 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-utilities" (OuterVolumeSpecName: "utilities") pod "3106d42e-85ab-4a60-93aa-8e08f7acc6a7" (UID: "3106d42e-85ab-4a60-93aa-8e08f7acc6a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.730253 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-kube-api-access-b7bms" (OuterVolumeSpecName: "kube-api-access-b7bms") pod "3106d42e-85ab-4a60-93aa-8e08f7acc6a7" (UID: "3106d42e-85ab-4a60-93aa-8e08f7acc6a7"). InnerVolumeSpecName "kube-api-access-b7bms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.746944 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3106d42e-85ab-4a60-93aa-8e08f7acc6a7" (UID: "3106d42e-85ab-4a60-93aa-8e08f7acc6a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.831773 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.831809 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:56:49 crc kubenswrapper[4770]: I1209 15:56:49.831821 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7bms\" (UniqueName: \"kubernetes.io/projected/3106d42e-85ab-4a60-93aa-8e08f7acc6a7-kube-api-access-b7bms\") on node \"crc\" DevicePath \"\"" Dec 09 15:56:50 crc kubenswrapper[4770]: I1209 15:56:50.528185 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmdhp" event={"ID":"3106d42e-85ab-4a60-93aa-8e08f7acc6a7","Type":"ContainerDied","Data":"8d1b0869089aa1818bcedcefeeaacc70ee1c5808d2a50d4c66ffda34953f8315"} Dec 09 15:56:50 crc kubenswrapper[4770]: I1209 15:56:50.528251 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmdhp" Dec 09 15:56:50 crc kubenswrapper[4770]: I1209 15:56:50.528293 4770 scope.go:117] "RemoveContainer" containerID="45bbeb76476c27901771d2f5b3cd60c955796d572c8d082f7656a453f9b901ab" Dec 09 15:56:50 crc kubenswrapper[4770]: I1209 15:56:50.551524 4770 scope.go:117] "RemoveContainer" containerID="d458249fa991d6d38db8ec3ef429a6c96836dc7f644ed73f919dce1d1167ef34" Dec 09 15:56:50 crc kubenswrapper[4770]: I1209 15:56:50.573792 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmdhp"] Dec 09 15:56:50 crc kubenswrapper[4770]: I1209 15:56:50.581625 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmdhp"] Dec 09 15:56:50 crc kubenswrapper[4770]: I1209 15:56:50.592319 4770 scope.go:117] "RemoveContainer" containerID="1b8cce4f6d51f04c2bdc87650757e3971fb6738c1b95c5063411b970766fde71" Dec 09 15:56:50 crc kubenswrapper[4770]: I1209 15:56:50.607523 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" path="/var/lib/kubelet/pods/3106d42e-85ab-4a60-93aa-8e08f7acc6a7/volumes" Dec 09 15:56:53 crc kubenswrapper[4770]: E1209 15:56:53.591743 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:56:53 crc kubenswrapper[4770]: E1209 15:56:53.591824 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:56:54 crc kubenswrapper[4770]: I1209 15:56:54.100227 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7f5c5648d4-9lc92_a562e446-afad-41e8-9169-41f2e14712a2/manager/0.log" Dec 09 15:56:54 crc kubenswrapper[4770]: I1209 15:56:54.100492 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7f5c5648d4-9lc92_a562e446-afad-41e8-9169-41f2e14712a2/kube-rbac-proxy/0.log" Dec 09 15:57:04 crc kubenswrapper[4770]: E1209 15:57:04.591329 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:57:08 crc kubenswrapper[4770]: E1209 15:57:08.596625 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:57:15 crc kubenswrapper[4770]: E1209 15:57:15.589512 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:57:21 crc kubenswrapper[4770]: E1209 15:57:21.590963 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:57:27 crc kubenswrapper[4770]: E1209 15:57:27.590945 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:57:35 crc kubenswrapper[4770]: E1209 15:57:35.591423 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:57:38 crc kubenswrapper[4770]: E1209 15:57:38.609884 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:57:44 crc kubenswrapper[4770]: I1209 15:57:44.244284 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:57:44 crc kubenswrapper[4770]: I1209 15:57:44.244926 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:57:46 crc kubenswrapper[4770]: E1209 15:57:46.603599 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:57:52 crc kubenswrapper[4770]: E1209 15:57:52.601066 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:57:59 crc kubenswrapper[4770]: E1209 15:57:59.590538 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:58:07 crc kubenswrapper[4770]: E1209 15:58:07.592666 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:58:10 crc kubenswrapper[4770]: E1209 15:58:10.591465 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:58:14 crc kubenswrapper[4770]: I1209 15:58:14.244147 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:58:14 crc kubenswrapper[4770]: I1209 15:58:14.245217 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:58:19 crc kubenswrapper[4770]: I1209 15:58:19.592171 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:58:19 crc kubenswrapper[4770]: E1209 15:58:19.722429 4770 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:58:19 crc kubenswrapper[4770]: E1209 15:58:19.722506 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 15:58:19 crc kubenswrapper[4770]: E1209 15:58:19.722672 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:58:19 crc kubenswrapper[4770]: E1209 15:58:19.723924 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:58:24 crc kubenswrapper[4770]: E1209 15:58:24.591123 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:58:28 crc kubenswrapper[4770]: I1209 15:58:28.623927 4770 generic.go:334] "Generic (PLEG): container finished" podID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" containerID="cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc" exitCode=0 Dec 09 15:58:28 crc kubenswrapper[4770]: I1209 15:58:28.624028 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck9c5/must-gather-9fljd" event={"ID":"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd","Type":"ContainerDied","Data":"cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc"} Dec 09 15:58:28 crc kubenswrapper[4770]: I1209 15:58:28.625166 4770 scope.go:117] "RemoveContainer" containerID="cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc" Dec 09 15:58:29 crc kubenswrapper[4770]: I1209 15:58:29.624269 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ck9c5_must-gather-9fljd_f2e07f6f-2a28-494f-87a2-4c9cabd03ecd/gather/0.log" Dec 09 15:58:33 crc kubenswrapper[4770]: E1209 15:58:33.589935 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.164858 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ck9c5/must-gather-9fljd"] Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.165632 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ck9c5/must-gather-9fljd" podUID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" containerName="copy" containerID="cri-o://2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b" gracePeriod=2 Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.178462 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ck9c5/must-gather-9fljd"] Dec 09 15:58:37 crc 
kubenswrapper[4770]: E1209 15:58:37.590844 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.682548 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ck9c5_must-gather-9fljd_f2e07f6f-2a28-494f-87a2-4c9cabd03ecd/copy/0.log" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.683019 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.717329 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ck9c5_must-gather-9fljd_f2e07f6f-2a28-494f-87a2-4c9cabd03ecd/copy/0.log" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.718352 4770 generic.go:334] "Generic (PLEG): container finished" podID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" containerID="2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b" exitCode=143 Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.718454 4770 scope.go:117] "RemoveContainer" containerID="2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.718594 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck9c5/must-gather-9fljd" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.745694 4770 scope.go:117] "RemoveContainer" containerID="cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.799329 4770 scope.go:117] "RemoveContainer" containerID="2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b" Dec 09 15:58:37 crc kubenswrapper[4770]: E1209 15:58:37.799842 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b\": container with ID starting with 2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b not found: ID does not exist" containerID="2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.799895 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b"} err="failed to get container status \"2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b\": rpc error: code = NotFound desc = could not find container \"2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b\": container with ID starting with 2bacdb0bc68d259ae1fe9b1358d673c43b439da81d525e79151a39e171a3013b not found: ID does not exist" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.799923 4770 scope.go:117] "RemoveContainer" containerID="cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc" Dec 09 15:58:37 crc kubenswrapper[4770]: E1209 15:58:37.800280 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc\": container with ID starting with 
cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc not found: ID does not exist" containerID="cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.800335 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc"} err="failed to get container status \"cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc\": rpc error: code = NotFound desc = could not find container \"cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc\": container with ID starting with cb5bedf797fd233b1dd83f7de6b9277c6f9a993a05de90e522fdb7684b97e3dc not found: ID does not exist" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.803182 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpdt7\" (UniqueName: \"kubernetes.io/projected/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-kube-api-access-kpdt7\") pod \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\" (UID: \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\") " Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.803422 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-must-gather-output\") pod \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\" (UID: \"f2e07f6f-2a28-494f-87a2-4c9cabd03ecd\") " Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.813544 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-kube-api-access-kpdt7" (OuterVolumeSpecName: "kube-api-access-kpdt7") pod "f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" (UID: "f2e07f6f-2a28-494f-87a2-4c9cabd03ecd"). InnerVolumeSpecName "kube-api-access-kpdt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.906270 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpdt7\" (UniqueName: \"kubernetes.io/projected/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-kube-api-access-kpdt7\") on node \"crc\" DevicePath \"\"" Dec 09 15:58:37 crc kubenswrapper[4770]: I1209 15:58:37.962229 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" (UID: "f2e07f6f-2a28-494f-87a2-4c9cabd03ecd"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 15:58:38 crc kubenswrapper[4770]: I1209 15:58:38.008780 4770 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 09 15:58:38 crc kubenswrapper[4770]: I1209 15:58:38.599651 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" path="/var/lib/kubelet/pods/f2e07f6f-2a28-494f-87a2-4c9cabd03ecd/volumes" Dec 09 15:58:44 crc kubenswrapper[4770]: I1209 15:58:44.243142 4770 patch_prober.go:28] interesting pod/machine-config-daemon-fbhnj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:58:44 crc kubenswrapper[4770]: I1209 15:58:44.243704 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:58:44 crc kubenswrapper[4770]: I1209 15:58:44.243844 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" Dec 09 15:58:44 crc kubenswrapper[4770]: I1209 15:58:44.244816 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb"} pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:58:44 crc kubenswrapper[4770]: I1209 15:58:44.244870 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" containerName="machine-config-daemon" containerID="cri-o://b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" gracePeriod=600 Dec 09 15:58:45 crc kubenswrapper[4770]: E1209 15:58:45.589626 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:58:46 crc kubenswrapper[4770]: E1209 15:58:46.193560 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:58:46 crc kubenswrapper[4770]: I1209 15:58:46.964750 4770 generic.go:334] "Generic (PLEG): container finished" podID="51498c5e-9a5a-426a-aac1-0da87076675a" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" exitCode=0 Dec 09 15:58:46 crc 
kubenswrapper[4770]: I1209 15:58:46.964853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerDied","Data":"b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb"} Dec 09 15:58:46 crc kubenswrapper[4770]: I1209 15:58:46.965046 4770 scope.go:117] "RemoveContainer" containerID="8e05582f6cc1c08a1110f5fd5979afe68a350503e91b64b764661d2248656855" Dec 09 15:58:46 crc kubenswrapper[4770]: I1209 15:58:46.967464 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 15:58:46 crc kubenswrapper[4770]: E1209 15:58:46.967925 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:58:49 crc kubenswrapper[4770]: E1209 15:58:49.739866 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:58:49 crc kubenswrapper[4770]: E1209 15:58:49.740415 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 15:58:49 crc kubenswrapper[4770]: E1209 15:58:49.740554 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 15:58:49 crc kubenswrapper[4770]: E1209 15:58:49.741821 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:58:56 crc kubenswrapper[4770]: E1209 15:58:56.590568 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:59:01 crc kubenswrapper[4770]: I1209 15:59:01.588529 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 15:59:01 crc kubenswrapper[4770]: E1209 15:59:01.589432 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:59:03 crc kubenswrapper[4770]: E1209 15:59:03.590828 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:59:08 crc kubenswrapper[4770]: E1209 15:59:08.597118 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:59:16 crc kubenswrapper[4770]: I1209 15:59:16.588527 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 15:59:16 crc kubenswrapper[4770]: E1209 15:59:16.589337 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:59:16 crc kubenswrapper[4770]: E1209 15:59:16.590215 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:59:19 crc kubenswrapper[4770]: E1209 15:59:19.590269 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:59:27 crc kubenswrapper[4770]: E1209 
15:59:27.590424 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:59:29 crc kubenswrapper[4770]: I1209 15:59:29.116431 4770 scope.go:117] "RemoveContainer" containerID="a9cd541ef84da7b1cc78a796e9d13ad08049c1dd840d9b06eef1aaf81d041d90" Dec 09 15:59:31 crc kubenswrapper[4770]: I1209 15:59:31.589952 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 15:59:31 crc kubenswrapper[4770]: E1209 15:59:31.590828 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:59:31 crc kubenswrapper[4770]: E1209 15:59:31.592160 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:59:41 crc kubenswrapper[4770]: E1209 15:59:41.591284 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 15:59:42 crc kubenswrapper[4770]: I1209 15:59:42.588757 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 15:59:42 crc kubenswrapper[4770]: E1209 15:59:42.589426 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 15:59:42 crc kubenswrapper[4770]: E1209 15:59:42.590047 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:59:54 crc kubenswrapper[4770]: E1209 15:59:54.638869 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 15:59:55 crc 
Dec 09 15:59:55 crc kubenswrapper[4770]: I1209 15:59:55.588286 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb"
Dec 09 15:59:55 crc kubenswrapper[4770]: E1209 15:59:55.588587 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a"
Dec 09 15:59:55 crc kubenswrapper[4770]: E1209 15:59:55.590903 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.153942 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"]
Dec 09 16:00:00 crc kubenswrapper[4770]: E1209 16:00:00.157451 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerName="registry-server"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.157509 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerName="registry-server"
Dec 09 16:00:00 crc kubenswrapper[4770]: E1209 16:00:00.157534 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" containerName="registry-server"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.157547 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" containerName="registry-server"
Dec 09 16:00:00 crc kubenswrapper[4770]: E1209 16:00:00.157571 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" containerName="gather"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.157585 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" containerName="gather"
Dec 09 16:00:00 crc kubenswrapper[4770]: E1209 16:00:00.157617 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" containerName="extract-content"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.157628 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" containerName="extract-content"
Dec 09 16:00:00 crc kubenswrapper[4770]: E1209 16:00:00.157651 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerName="extract-utilities"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.157663 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerName="extract-utilities"
Dec 09 16:00:00 crc kubenswrapper[4770]: E1209 16:00:00.157678 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" containerName="extract-utilities"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.157692 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" containerName="extract-utilities"
Dec 09 16:00:00 crc kubenswrapper[4770]: E1209 16:00:00.157716 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" containerName="copy"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.157755 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" containerName="copy"
Dec 09 16:00:00 crc kubenswrapper[4770]: E1209 16:00:00.157784 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerName="extract-content"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.157797 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerName="extract-content"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.158308 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" containerName="copy"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.158339 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3106d42e-85ab-4a60-93aa-8e08f7acc6a7" containerName="registry-server"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.158374 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e07f6f-2a28-494f-87a2-4c9cabd03ecd" containerName="gather"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.158397 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="935227fa-9e5c-4caa-9d3f-02187f596702" containerName="registry-server"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.159686 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.162414 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.162496 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.166970 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"]
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.256436 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81159737-6537-4cd6-8637-35e7e0bc0038-secret-volume\") pod \"collect-profiles-29421600-z52qr\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.256824 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp8bf\" (UniqueName: \"kubernetes.io/projected/81159737-6537-4cd6-8637-35e7e0bc0038-kube-api-access-gp8bf\") pod \"collect-profiles-29421600-z52qr\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.256974 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81159737-6537-4cd6-8637-35e7e0bc0038-config-volume\") pod \"collect-profiles-29421600-z52qr\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.359341 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81159737-6537-4cd6-8637-35e7e0bc0038-secret-volume\") pod \"collect-profiles-29421600-z52qr\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.359532 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp8bf\" (UniqueName: \"kubernetes.io/projected/81159737-6537-4cd6-8637-35e7e0bc0038-kube-api-access-gp8bf\") pod \"collect-profiles-29421600-z52qr\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.359594 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81159737-6537-4cd6-8637-35e7e0bc0038-config-volume\") pod \"collect-profiles-29421600-z52qr\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.360699 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81159737-6537-4cd6-8637-35e7e0bc0038-config-volume\") pod \"collect-profiles-29421600-z52qr\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.375697 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81159737-6537-4cd6-8637-35e7e0bc0038-secret-volume\") pod \"collect-profiles-29421600-z52qr\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.379440 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp8bf\" (UniqueName: \"kubernetes.io/projected/81159737-6537-4cd6-8637-35e7e0bc0038-kube-api-access-gp8bf\") pod \"collect-profiles-29421600-z52qr\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr" Dec 09 16:00:00 crc kubenswrapper[4770]: I1209 16:00:00.943927 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr"] Dec 09 16:00:01 crc kubenswrapper[4770]: I1209 16:00:01.717136 4770 generic.go:334] "Generic (PLEG): container finished" podID="81159737-6537-4cd6-8637-35e7e0bc0038" containerID="068727bd61ace945de97a8ba051e46de2387482ca52bcbe0ca585b5c2f450356" exitCode=0 Dec 09 16:00:01 crc kubenswrapper[4770]: I1209 16:00:01.717454 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr" event={"ID":"81159737-6537-4cd6-8637-35e7e0bc0038","Type":"ContainerDied","Data":"068727bd61ace945de97a8ba051e46de2387482ca52bcbe0ca585b5c2f450356"} Dec 09 16:00:01 crc kubenswrapper[4770]: I1209 16:00:01.717483 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr" event={"ID":"81159737-6537-4cd6-8637-35e7e0bc0038","Type":"ContainerStarted","Data":"271966d4977e0bb60a9fe3b52f4c7c0321bf97225bff23245ca97d32b6d76cdb"} Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.128579 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr" Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.227498 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp8bf\" (UniqueName: \"kubernetes.io/projected/81159737-6537-4cd6-8637-35e7e0bc0038-kube-api-access-gp8bf\") pod \"81159737-6537-4cd6-8637-35e7e0bc0038\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.227971 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81159737-6537-4cd6-8637-35e7e0bc0038-secret-volume\") pod \"81159737-6537-4cd6-8637-35e7e0bc0038\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.228138 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81159737-6537-4cd6-8637-35e7e0bc0038-config-volume\") pod \"81159737-6537-4cd6-8637-35e7e0bc0038\" (UID: \"81159737-6537-4cd6-8637-35e7e0bc0038\") " Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.228763 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81159737-6537-4cd6-8637-35e7e0bc0038-config-volume" (OuterVolumeSpecName: "config-volume") pod "81159737-6537-4cd6-8637-35e7e0bc0038" (UID: "81159737-6537-4cd6-8637-35e7e0bc0038"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.233898 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81159737-6537-4cd6-8637-35e7e0bc0038-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81159737-6537-4cd6-8637-35e7e0bc0038" (UID: "81159737-6537-4cd6-8637-35e7e0bc0038"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.234032 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81159737-6537-4cd6-8637-35e7e0bc0038-kube-api-access-gp8bf" (OuterVolumeSpecName: "kube-api-access-gp8bf") pod "81159737-6537-4cd6-8637-35e7e0bc0038" (UID: "81159737-6537-4cd6-8637-35e7e0bc0038"). InnerVolumeSpecName "kube-api-access-gp8bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.330608 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81159737-6537-4cd6-8637-35e7e0bc0038-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.330640 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp8bf\" (UniqueName: \"kubernetes.io/projected/81159737-6537-4cd6-8637-35e7e0bc0038-kube-api-access-gp8bf\") on node \"crc\" DevicePath \"\"" Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.330650 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81159737-6537-4cd6-8637-35e7e0bc0038-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.739011 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr" event={"ID":"81159737-6537-4cd6-8637-35e7e0bc0038","Type":"ContainerDied","Data":"271966d4977e0bb60a9fe3b52f4c7c0321bf97225bff23245ca97d32b6d76cdb"} Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.739054 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="271966d4977e0bb60a9fe3b52f4c7c0321bf97225bff23245ca97d32b6d76cdb" Dec 09 16:00:03 crc kubenswrapper[4770]: I1209 16:00:03.739063 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421600-z52qr" Dec 09 16:00:04 crc kubenswrapper[4770]: I1209 16:00:04.205604 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg"] Dec 09 16:00:04 crc kubenswrapper[4770]: I1209 16:00:04.228047 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421555-sdxmg"] Dec 09 16:00:04 crc kubenswrapper[4770]: I1209 16:00:04.601172 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e26c41-83c2-4332-be8e-b6581e866db1" path="/var/lib/kubelet/pods/68e26c41-83c2-4332-be8e-b6581e866db1/volumes" Dec 09 16:00:06 crc kubenswrapper[4770]: E1209 16:00:06.598039 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:00:07 crc kubenswrapper[4770]: I1209 16:00:07.589636 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:00:07 crc kubenswrapper[4770]: E1209 16:00:07.590019 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:00:09 crc kubenswrapper[4770]: E1209 16:00:09.590077 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:00:17 crc kubenswrapper[4770]: E1209 16:00:17.591870 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:00:19 crc kubenswrapper[4770]: I1209 16:00:19.588186 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:00:19 crc kubenswrapper[4770]: E1209 16:00:19.588715 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:00:23 crc kubenswrapper[4770]: E1209 16:00:23.590664 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:00:29 crc kubenswrapper[4770]: I1209 16:00:29.181295 4770 scope.go:117] "RemoveContainer" containerID="8f3af003fb91a041055cdea7720fd76629efbec55c22a29054b67c9075dda9ec" Dec 09 16:00:31 crc kubenswrapper[4770]: E1209 16:00:31.590383 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:00:33 crc kubenswrapper[4770]: I1209 16:00:33.588848 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:00:33 crc kubenswrapper[4770]: E1209 16:00:33.589766 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:00:36 crc kubenswrapper[4770]: E1209 16:00:36.591962 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:00:42 crc kubenswrapper[4770]: E1209 16:00:42.590886 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:00:45 crc kubenswrapper[4770]: I1209 16:00:45.589220 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:00:45 crc kubenswrapper[4770]: E1209 16:00:45.589901 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:00:49 crc kubenswrapper[4770]: E1209 16:00:49.595990 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:00:55 crc kubenswrapper[4770]: E1209 16:00:55.591597 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.199208 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29421601-4v8kg"] Dec 09 16:01:00 crc kubenswrapper[4770]: E1209 16:01:00.200191 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81159737-6537-4cd6-8637-35e7e0bc0038" containerName="collect-profiles" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.200204 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="81159737-6537-4cd6-8637-35e7e0bc0038" containerName="collect-profiles" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.200444 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="81159737-6537-4cd6-8637-35e7e0bc0038" containerName="collect-profiles" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.201336 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.210555 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421601-4v8kg"] Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.226060 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-combined-ca-bundle\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.226234 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-fernet-keys\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.226273 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh24q\" (UniqueName: \"kubernetes.io/projected/c2e19419-69a9-4f64-8ec6-c3398b048b40-kube-api-access-wh24q\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.226397 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-config-data\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.328523 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-fernet-keys\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.328615 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh24q\" (UniqueName: 
\"kubernetes.io/projected/c2e19419-69a9-4f64-8ec6-c3398b048b40-kube-api-access-wh24q\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.328707 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-config-data\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.328837 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-combined-ca-bundle\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.334815 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-fernet-keys\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.335581 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-config-data\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.345435 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-combined-ca-bundle\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.356710 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh24q\" (UniqueName: \"kubernetes.io/projected/c2e19419-69a9-4f64-8ec6-c3398b048b40-kube-api-access-wh24q\") pod \"keystone-cron-29421601-4v8kg\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.528940 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.588487 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:01:00 crc kubenswrapper[4770]: E1209 16:01:00.589109 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:01:00 crc kubenswrapper[4770]: I1209 16:01:00.972274 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421601-4v8kg"] Dec 09 16:01:01 crc kubenswrapper[4770]: I1209 16:01:01.325173 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421601-4v8kg" event={"ID":"c2e19419-69a9-4f64-8ec6-c3398b048b40","Type":"ContainerStarted","Data":"1ec346ce68884b62fc99146bad74996a366b3131da185202a12f262d627007f9"} Dec 09 16:01:01 crc kubenswrapper[4770]: I1209 16:01:01.325228 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421601-4v8kg" event={"ID":"c2e19419-69a9-4f64-8ec6-c3398b048b40","Type":"ContainerStarted","Data":"ac9c49bbc274f9dd437a72638314244e44f4ab7a2a838041a2442535c6b80e2e"} Dec 09 16:01:01 crc kubenswrapper[4770]: I1209 16:01:01.348532 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29421601-4v8kg" podStartSLOduration=1.348476368 podStartE2EDuration="1.348476368s" podCreationTimestamp="2025-12-09 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:01:01.342304139 +0000 UTC m=+5893.238506295" watchObservedRunningTime="2025-12-09 16:01:01.348476368 +0000 UTC m=+5893.244678524" Dec 09 16:01:02 crc kubenswrapper[4770]: E1209 16:01:02.592424 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:01:04 crc kubenswrapper[4770]: I1209 16:01:04.353963 4770 generic.go:334] "Generic (PLEG): container finished" podID="c2e19419-69a9-4f64-8ec6-c3398b048b40" containerID="1ec346ce68884b62fc99146bad74996a366b3131da185202a12f262d627007f9" exitCode=0 Dec 09 16:01:04 crc kubenswrapper[4770]: I1209 16:01:04.354061 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421601-4v8kg" event={"ID":"c2e19419-69a9-4f64-8ec6-c3398b048b40","Type":"ContainerDied","Data":"1ec346ce68884b62fc99146bad74996a366b3131da185202a12f262d627007f9"} Dec 09 16:01:05 crc kubenswrapper[4770]: I1209 16:01:05.783938 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:05 crc kubenswrapper[4770]: I1209 16:01:05.975322 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-combined-ca-bundle\") pod \"c2e19419-69a9-4f64-8ec6-c3398b048b40\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " Dec 09 16:01:05 crc kubenswrapper[4770]: I1209 16:01:05.975712 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-fernet-keys\") pod \"c2e19419-69a9-4f64-8ec6-c3398b048b40\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " Dec 09 16:01:05 crc kubenswrapper[4770]: I1209 16:01:05.975886 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh24q\" (UniqueName: \"kubernetes.io/projected/c2e19419-69a9-4f64-8ec6-c3398b048b40-kube-api-access-wh24q\") pod \"c2e19419-69a9-4f64-8ec6-c3398b048b40\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " Dec 09 16:01:05 crc kubenswrapper[4770]: I1209 16:01:05.975990 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-config-data\") pod \"c2e19419-69a9-4f64-8ec6-c3398b048b40\" (UID: \"c2e19419-69a9-4f64-8ec6-c3398b048b40\") " Dec 09 16:01:05 crc kubenswrapper[4770]: I1209 16:01:05.981091 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e19419-69a9-4f64-8ec6-c3398b048b40-kube-api-access-wh24q" (OuterVolumeSpecName: "kube-api-access-wh24q") pod "c2e19419-69a9-4f64-8ec6-c3398b048b40" (UID: "c2e19419-69a9-4f64-8ec6-c3398b048b40"). InnerVolumeSpecName "kube-api-access-wh24q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:01:05 crc kubenswrapper[4770]: I1209 16:01:05.985050 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c2e19419-69a9-4f64-8ec6-c3398b048b40" (UID: "c2e19419-69a9-4f64-8ec6-c3398b048b40"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:01:06 crc kubenswrapper[4770]: I1209 16:01:06.004876 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2e19419-69a9-4f64-8ec6-c3398b048b40" (UID: "c2e19419-69a9-4f64-8ec6-c3398b048b40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:01:06 crc kubenswrapper[4770]: I1209 16:01:06.029525 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-config-data" (OuterVolumeSpecName: "config-data") pod "c2e19419-69a9-4f64-8ec6-c3398b048b40" (UID: "c2e19419-69a9-4f64-8ec6-c3398b048b40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:01:06 crc kubenswrapper[4770]: I1209 16:01:06.081435 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 16:01:06 crc kubenswrapper[4770]: I1209 16:01:06.081482 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 16:01:06 crc kubenswrapper[4770]: I1209 16:01:06.081498 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh24q\" (UniqueName: \"kubernetes.io/projected/c2e19419-69a9-4f64-8ec6-c3398b048b40-kube-api-access-wh24q\") on node \"crc\" DevicePath \"\"" Dec 09 16:01:06 crc kubenswrapper[4770]: I1209 16:01:06.081510 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e19419-69a9-4f64-8ec6-c3398b048b40-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 16:01:06 crc kubenswrapper[4770]: I1209 16:01:06.374588 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421601-4v8kg" event={"ID":"c2e19419-69a9-4f64-8ec6-c3398b048b40","Type":"ContainerDied","Data":"ac9c49bbc274f9dd437a72638314244e44f4ab7a2a838041a2442535c6b80e2e"} Dec 09 16:01:06 crc kubenswrapper[4770]: I1209 16:01:06.374628 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac9c49bbc274f9dd437a72638314244e44f4ab7a2a838041a2442535c6b80e2e" Dec 09 16:01:06 crc kubenswrapper[4770]: I1209 16:01:06.374636 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421601-4v8kg" Dec 09 16:01:07 crc kubenswrapper[4770]: E1209 16:01:07.591181 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:01:13 crc kubenswrapper[4770]: I1209 16:01:13.589318 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:01:13 crc kubenswrapper[4770]: E1209 16:01:13.590133 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:01:15 crc kubenswrapper[4770]: E1209 16:01:15.591069 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:01:22 crc kubenswrapper[4770]: E1209 16:01:22.590246 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:01:27 crc kubenswrapper[4770]: I1209 16:01:27.594464 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:01:27 crc kubenswrapper[4770]: E1209 16:01:27.598969 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:01:27 crc kubenswrapper[4770]: E1209 16:01:27.608157 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:01:34 crc kubenswrapper[4770]: E1209 16:01:34.591258 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:01:40 crc kubenswrapper[4770]: E1209 16:01:40.592913 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:01:41 crc kubenswrapper[4770]: I1209 16:01:41.590135 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:01:41 crc kubenswrapper[4770]: E1209 16:01:41.591099 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:01:48 crc kubenswrapper[4770]: E1209 16:01:48.600767 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:01:51 crc kubenswrapper[4770]: E1209 16:01:51.590900 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" 
podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.602744 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zb8wq"] Dec 09 16:01:51 crc kubenswrapper[4770]: E1209 16:01:51.603393 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e19419-69a9-4f64-8ec6-c3398b048b40" containerName="keystone-cron" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.603409 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e19419-69a9-4f64-8ec6-c3398b048b40" containerName="keystone-cron" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.603608 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e19419-69a9-4f64-8ec6-c3398b048b40" containerName="keystone-cron" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.605391 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.642210 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zb8wq"] Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.802651 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-utilities\") pod \"community-operators-zb8wq\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.803130 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-catalog-content\") pod \"community-operators-zb8wq\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.803285 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rcxq\" (UniqueName: \"kubernetes.io/projected/ce652271-187e-4d07-acf2-a88d5c8f9fe9-kube-api-access-9rcxq\") pod \"community-operators-zb8wq\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.906464 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rcxq\" (UniqueName: \"kubernetes.io/projected/ce652271-187e-4d07-acf2-a88d5c8f9fe9-kube-api-access-9rcxq\") pod \"community-operators-zb8wq\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.906595 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-utilities\") pod \"community-operators-zb8wq\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.906817 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-catalog-content\") pod \"community-operators-zb8wq\" (UID: 
\"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.907415 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-utilities\") pod \"community-operators-zb8wq\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.907484 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-catalog-content\") pod \"community-operators-zb8wq\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:51 crc kubenswrapper[4770]: I1209 16:01:51.933257 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rcxq\" (UniqueName: \"kubernetes.io/projected/ce652271-187e-4d07-acf2-a88d5c8f9fe9-kube-api-access-9rcxq\") pod \"community-operators-zb8wq\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:52 crc kubenswrapper[4770]: I1209 16:01:52.225376 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:01:52 crc kubenswrapper[4770]: I1209 16:01:52.844767 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zb8wq"] Dec 09 16:01:53 crc kubenswrapper[4770]: I1209 16:01:53.588826 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:01:53 crc kubenswrapper[4770]: E1209 16:01:53.589380 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:01:53 crc kubenswrapper[4770]: I1209 16:01:53.863914 4770 generic.go:334] "Generic (PLEG): container finished" podID="ce652271-187e-4d07-acf2-a88d5c8f9fe9" containerID="b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b" exitCode=0 Dec 09 16:01:53 crc kubenswrapper[4770]: I1209 16:01:53.863960 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zb8wq" event={"ID":"ce652271-187e-4d07-acf2-a88d5c8f9fe9","Type":"ContainerDied","Data":"b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b"} Dec 09 16:01:53 crc kubenswrapper[4770]: I1209 16:01:53.864009 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zb8wq" event={"ID":"ce652271-187e-4d07-acf2-a88d5c8f9fe9","Type":"ContainerStarted","Data":"e56e4bfd910728d1b253a46e82b19ea0ca163e62dc1b6469026bd8eeea1d42a4"} Dec 09 16:01:55 crc kubenswrapper[4770]: I1209 16:01:55.891111 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zb8wq" event={"ID":"ce652271-187e-4d07-acf2-a88d5c8f9fe9","Type":"ContainerStarted","Data":"ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385"} Dec 09 16:01:56 crc 
kubenswrapper[4770]: I1209 16:01:56.903795 4770 generic.go:334] "Generic (PLEG): container finished" podID="ce652271-187e-4d07-acf2-a88d5c8f9fe9" containerID="ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385" exitCode=0 Dec 09 16:01:56 crc kubenswrapper[4770]: I1209 16:01:56.903903 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zb8wq" event={"ID":"ce652271-187e-4d07-acf2-a88d5c8f9fe9","Type":"ContainerDied","Data":"ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385"} Dec 09 16:01:57 crc kubenswrapper[4770]: I1209 16:01:57.917983 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zb8wq" event={"ID":"ce652271-187e-4d07-acf2-a88d5c8f9fe9","Type":"ContainerStarted","Data":"b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6"} Dec 09 16:01:57 crc kubenswrapper[4770]: I1209 16:01:57.940373 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zb8wq" podStartSLOduration=3.501409123 podStartE2EDuration="6.940351335s" podCreationTimestamp="2025-12-09 16:01:51 +0000 UTC" firstStartedPulling="2025-12-09 16:01:53.865499626 +0000 UTC m=+5945.761701762" lastFinishedPulling="2025-12-09 16:01:57.304441818 +0000 UTC m=+5949.200643974" observedRunningTime="2025-12-09 16:01:57.936919931 +0000 UTC m=+5949.833122077" watchObservedRunningTime="2025-12-09 16:01:57.940351335 +0000 UTC m=+5949.836553461" Dec 09 16:02:01 crc kubenswrapper[4770]: E1209 16:02:01.591117 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:02:02 crc kubenswrapper[4770]: I1209 16:02:02.227125 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:02:02 crc kubenswrapper[4770]: I1209 16:02:02.227515 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:02:02 crc kubenswrapper[4770]: I1209 16:02:02.292272 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:02:03 crc kubenswrapper[4770]: I1209 16:02:03.020028 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:02:03 crc kubenswrapper[4770]: I1209 16:02:03.078231 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zb8wq"] Dec 09 16:02:04 crc kubenswrapper[4770]: I1209 16:02:04.992145 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zb8wq" podUID="ce652271-187e-4d07-acf2-a88d5c8f9fe9" containerName="registry-server" containerID="cri-o://b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6" gracePeriod=2 Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.548679 4770 util.go:48] "No ready sandbox for pod can be found. 
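The pod_startup_latency_tracker record above for community-operators-zb8wq reports podStartSLOduration=3.501409123 against podStartE2EDuration=6.940351335s. The difference appears to be exactly the image-pull window, which can be cross-checked from the record's own monotonic m=+ offsets (an observed relationship in this record, not a documented guarantee):

    # Numbers copied from the log record above; the SLO duration looks like
    # the end-to-end duration minus the image-pull window.
    e2e = 6.940351335                       # podStartE2EDuration
    pull = 5949.200643974 - 5945.761701762  # lastFinishedPulling - firstStartedPulling = 3.438942212s
    print(f"{e2e - pull:.9f}")              # 3.501409123, matching podStartSLOduration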
Need to start a new one" pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:02:05 crc kubenswrapper[4770]: E1209 16:02:05.594311 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.737859 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-utilities\") pod \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.737936 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-catalog-content\") pod \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.738070 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rcxq\" (UniqueName: \"kubernetes.io/projected/ce652271-187e-4d07-acf2-a88d5c8f9fe9-kube-api-access-9rcxq\") pod \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\" (UID: \"ce652271-187e-4d07-acf2-a88d5c8f9fe9\") " Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.740153 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-utilities" (OuterVolumeSpecName: "utilities") pod "ce652271-187e-4d07-acf2-a88d5c8f9fe9" (UID: "ce652271-187e-4d07-acf2-a88d5c8f9fe9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.745194 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce652271-187e-4d07-acf2-a88d5c8f9fe9-kube-api-access-9rcxq" (OuterVolumeSpecName: "kube-api-access-9rcxq") pod "ce652271-187e-4d07-acf2-a88d5c8f9fe9" (UID: "ce652271-187e-4d07-acf2-a88d5c8f9fe9"). InnerVolumeSpecName "kube-api-access-9rcxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.796850 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce652271-187e-4d07-acf2-a88d5c8f9fe9" (UID: "ce652271-187e-4d07-acf2-a88d5c8f9fe9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.840961 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.841258 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rcxq\" (UniqueName: \"kubernetes.io/projected/ce652271-187e-4d07-acf2-a88d5c8f9fe9-kube-api-access-9rcxq\") on node \"crc\" DevicePath \"\"" Dec 09 16:02:05 crc kubenswrapper[4770]: I1209 16:02:05.841343 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce652271-187e-4d07-acf2-a88d5c8f9fe9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.002829 4770 generic.go:334] "Generic (PLEG): container finished" podID="ce652271-187e-4d07-acf2-a88d5c8f9fe9" containerID="b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6" exitCode=0 Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.002880 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zb8wq" event={"ID":"ce652271-187e-4d07-acf2-a88d5c8f9fe9","Type":"ContainerDied","Data":"b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6"} Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.002926 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zb8wq" event={"ID":"ce652271-187e-4d07-acf2-a88d5c8f9fe9","Type":"ContainerDied","Data":"e56e4bfd910728d1b253a46e82b19ea0ca163e62dc1b6469026bd8eeea1d42a4"} Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.002950 4770 scope.go:117] "RemoveContainer" containerID="b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.004711 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zb8wq" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.030603 4770 scope.go:117] "RemoveContainer" containerID="ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.058006 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zb8wq"] Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.064778 4770 scope.go:117] "RemoveContainer" containerID="b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.068476 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zb8wq"] Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.121190 4770 scope.go:117] "RemoveContainer" containerID="b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6" Dec 09 16:02:06 crc kubenswrapper[4770]: E1209 16:02:06.121723 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6\": container with ID starting with b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6 not found: ID does not exist" containerID="b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.121772 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6"} err="failed to get container status \"b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6\": rpc error: code = NotFound desc = could not find container \"b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6\": container with ID starting with b0b10ea4615ed37e08b82df76d2f4f7b0f51535b5414c638aaef644da1b19ee6 not found: ID does not exist" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.121793 4770 scope.go:117] "RemoveContainer" containerID="ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385" Dec 09 16:02:06 crc kubenswrapper[4770]: E1209 16:02:06.122527 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385\": container with ID starting with ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385 not found: ID does not exist" containerID="ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.122551 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385"} err="failed to get container status \"ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385\": rpc error: code = NotFound desc = could not find container \"ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385\": container with ID starting with ebf93b9035fc1fd7ce9c52b56e9d30c5f50b1c2b2c71893a4ebe0d4bd6d92385 not found: ID does not exist" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.122567 4770 scope.go:117] "RemoveContainer" containerID="b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b" Dec 09 16:02:06 crc kubenswrapper[4770]: E1209 16:02:06.123054 4770 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b\": container with ID starting with b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b not found: ID does not exist" containerID="b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.123240 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b"} err="failed to get container status \"b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b\": rpc error: code = NotFound desc = could not find container \"b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b\": container with ID starting with b3fb345a922c1f32bc629b974ab27b63c40e3aed702d5d301754c551219cec8b not found: ID does not exist" Dec 09 16:02:06 crc kubenswrapper[4770]: I1209 16:02:06.604849 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce652271-187e-4d07-acf2-a88d5c8f9fe9" path="/var/lib/kubelet/pods/ce652271-187e-4d07-acf2-a88d5c8f9fe9/volumes" Dec 09 16:02:08 crc kubenswrapper[4770]: I1209 16:02:08.594920 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:02:08 crc kubenswrapper[4770]: E1209 16:02:08.595546 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:02:13 crc kubenswrapper[4770]: E1209 16:02:13.590004 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:02:20 crc kubenswrapper[4770]: E1209 16:02:20.590970 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:02:23 crc kubenswrapper[4770]: I1209 16:02:23.588178 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:02:23 crc kubenswrapper[4770]: E1209 16:02:23.589008 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:02:27 crc kubenswrapper[4770]: E1209 16:02:27.592065 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:02:32 crc kubenswrapper[4770]: E1209 16:02:32.591308 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:02:36 crc kubenswrapper[4770]: I1209 16:02:36.589233 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:02:36 crc kubenswrapper[4770]: E1209 16:02:36.590228 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:02:40 crc kubenswrapper[4770]: E1209 16:02:40.594418 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:02:43 crc kubenswrapper[4770]: E1209 16:02:43.590899 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:02:47 crc kubenswrapper[4770]: I1209 16:02:47.588376 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:02:47 crc kubenswrapper[4770]: E1209 16:02:47.588914 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:02:54 crc kubenswrapper[4770]: E1209 16:02:54.592699 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:02:58 crc kubenswrapper[4770]: E1209 16:02:58.606423 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:03:01 crc kubenswrapper[4770]: I1209 16:03:01.589014 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:03:01 crc kubenswrapper[4770]: E1209 16:03:01.590231 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:03:07 crc kubenswrapper[4770]: E1209 16:03:07.591609 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:03:09 crc kubenswrapper[4770]: E1209 16:03:09.591684 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:03:15 crc kubenswrapper[4770]: I1209 16:03:15.589509 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:03:15 crc kubenswrapper[4770]: E1209 16:03:15.590346 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:03:21 crc kubenswrapper[4770]: E1209 16:03:21.591776 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:03:21 crc kubenswrapper[4770]: I1209 16:03:21.592095 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 16:03:21 crc kubenswrapper[4770]: E1209 16:03:21.715704 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 16:03:21 crc kubenswrapper[4770]: E1209 16:03:21.715787 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 16:03:21 crc kubenswrapper[4770]: E1209 16:03:21.715976 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6pmzg_openstack(73c9246e-4ec7-4b19-abcc-8df4fc43a74d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 16:03:21 crc kubenswrapper[4770]: E1209 16:03:21.717254 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:03:27 crc kubenswrapper[4770]: I1209 16:03:27.588359 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:03:27 crc kubenswrapper[4770]: E1209 16:03:27.590340 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:03:32 crc kubenswrapper[4770]: E1209 16:03:32.591293 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:03:34 crc kubenswrapper[4770]: E1209 16:03:34.595480 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:03:39 crc kubenswrapper[4770]: I1209 16:03:39.589894 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:03:39 crc kubenswrapper[4770]: E1209 16:03:39.591419 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fbhnj_openshift-machine-config-operator(51498c5e-9a5a-426a-aac1-0da87076675a)\"" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" podUID="51498c5e-9a5a-426a-aac1-0da87076675a" Dec 09 16:03:45 crc kubenswrapper[4770]: E1209 16:03:45.591276 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:03:47 crc kubenswrapper[4770]: E1209 16:03:47.591786 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:03:51 crc kubenswrapper[4770]: I1209 16:03:51.589948 4770 scope.go:117] "RemoveContainer" containerID="b85ad8f8de8a0f77514051afb58507d786ec4cde9b326a1f5875e650a348d3eb" Dec 09 16:03:52 crc kubenswrapper[4770]: I1209 16:03:52.343711 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fbhnj" event={"ID":"51498c5e-9a5a-426a-aac1-0da87076675a","Type":"ContainerStarted","Data":"9a5c5deb2c4738e94fdf006495ced0bb2819bb0c36c310febfb9394d6009d8ef"} Dec 09 16:03:58 crc kubenswrapper[4770]: E1209 16:03:58.599739 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:03:59 crc kubenswrapper[4770]: E1209 16:03:59.710139 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 16:03:59 crc kubenswrapper[4770]: E1209 16:03:59.710211 4770 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 16:03:59 crc kubenswrapper[4770]: E1209 16:03:59.710356 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5fh9dh5c9h674h98hfchddh9hb6h66ch565h55dh6hf4h59chb5hc9h59bh584h5fch677h85h547h5f7h59dh56bhf9h5bdh5d8hd7h564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4tpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46cbdc7f-5a87-4c97-a56e-910d75b00675): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 16:03:59 crc kubenswrapper[4770]: E1209 16:03:59.711580 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675" Dec 09 16:04:11 crc kubenswrapper[4770]: E1209 16:04:11.590778 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6pmzg" podUID="73c9246e-4ec7-4b19-abcc-8df4fc43a74d" Dec 09 16:04:14 crc kubenswrapper[4770]: E1209 16:04:14.592250 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="46cbdc7f-5a87-4c97-a56e-910d75b00675"